
They could just switch their DNS back to auto (or statically to Google/Cloudflare/etc., depending on how you configure it), no? Then fix it when you're back.

You could also set up two SSIDs, depending on your Wi-Fi setup. Point one at the Pi-hole and the other at a different DNS provider. The instruction if the Pi-hole breaks is just: switch Wi-Fi.
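
For the curious, here's a minimal sketch (stdlib Python, hypothetical addresses) of the kind of resolver health check you could script around that setup. It hand-builds a tiny DNS query so it can probe the Pi-hole specifically rather than whatever the system resolver is:

    import socket

    def dns_alive(server, timeout=2.0):
        # Minimal DNS A-record query for example.com, built by hand
        # so we can aim it at one specific resolver over UDP.
        query = (
            b"\x12\x34"                          # transaction ID
            b"\x01\x00"                          # flags: standard query, RD
            b"\x00\x01\x00\x00\x00\x00\x00\x00"  # 1 question, 0 answers
            b"\x07example\x03com\x00"            # QNAME: example.com
            b"\x00\x01\x00\x01"                  # QTYPE=A, QCLASS=IN
        )
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(query, (server, 53))
            s.recvfrom(512)  # any reply at all means the resolver is up
            return True
        except OSError:      # timeout or port unreachable
            return False
        finally:
            s.close()

    PIHOLE = "192.168.1.2"  # hypothetical Pi-hole address
    FALLBACK = "1.1.1.1"    # public resolver to fail over to

    if dns_alive(PIHOLE):
        print("Pi-hole is answering; keep clients pointed at it.")
    else:
        print("Pi-hole is down; switch clients/SSID to " + FALLBACK)

Run from anywhere on the LAN; if the Pi-hole stops answering, that's your cue to flip to the fallback SSID.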


As an OT systems architect I am totally floored. We design and plan for a systems lifecycle on a ~20-year scale, with the OT hardware lifecycle much shorter (~5 years; the controls hardware is closer to 10-20). Obviously, on Earth we can afford the luxury of adopting new things, which actually shortens the total system lifecycle, since new tech drives new designs.

I wish (and don't wish) I could work on something with a dependency of "design it once, because it's relatively inaccessible after it goes live." I'll definitely check out the documentary.


Video games used to be like this. Once you built the "gold master" CD/DVD/cartridge/etc., it was out of your hands. It was kinda nice to have a concrete end to the project [1]. Nowadays everything is on the 'net; you can ship patches, DLC, etc., and the notion of a game being "done" is murky.

[1] There was, however, one game I worked on where they had to pull the boxes from stores (delivered, but not yet for sale) and swap out the disc in order to release a critical fix that was discovered too late. Fun times (:


Which resulted in the notorious release of Outpost [1]. I think owners of that would have happily accepted a long series of post-release patches over that.

[1] https://en.wikipedia.org/wiki/Outpost_(1994_video_game)#Rece...


Especially when it was cartridge games. I remember when PC games started to get updates and you'd wait for next month's cover disc to get them. I seem to remember Frontier Elite having about a dozen...

I just checked the one commercial game I developed and there are two patches I can see released by Eidos for it.


I'm curious, what was the bug that was so critical the publisher decided it was best to perform such a (what I assume was) costly operation post-distribution?


I'm aware of one game, made by a company I worked for, that nearly released and would have broken every GameCube that played it. Nintendo had to pull 50k discs from distribution just before they were sent to retailers and destroy them.

The issue was that one programmer used an unauthorized system call to make the disc drive spin twice as fast, thinking it was a great way to resolve some of the data-streaming issues the game had. And yeah, it worked - but after a few hours of play it would kill the GameCube. It wasn't noticed because (usually) no one tests the game on actual discs until the actual gold master is made, and when the devkits died it was chalked up to random hardware faults and Nintendo just replaced them.


Ah, the HCF system call?


Honestly, it was just before my time at that studio, so I don't know exactly how it was done, but everyone knew about it because it cost us a lot of money and damaged our relationship with Nintendo somewhat. The game actually went on to be pretty successful after that, but yeah, it would have been a disaster.


It was a crash bug, but I'm not really sure of the details (and it has been some years...). Even at the time I wasn't personally involved; I just heard about it through the grapevine.

But yes, my understanding is it was quite expensive and the publisher was none too pleased (:


The cynic in me thinks it was probably a bug in the anti-piracy code.


Agreed. Due to the long lifecycle of manufacturing equipment we still see a lot of 100 Mb Ethernet out there, and not even on embedded gear.

I would note that all new products seem to be GbE or better.


I actually rebound my Windows keys to use this instead of the default Windows snipping tool. Sure, if I need a high level of editing I'll bring it into some other program (still using Flameshot to take the capture), but 95% of the time the built-in arrows, boxes, numbers, etc. do the quick attention-calling I need.

Bonus: this was a piece of 'bloatware' an admin rebuilt my computer with, but one I came to love.


I mean, I guess I get it for certain things. Silly to mandate it, though. Anyone whose work requires them to be in the office should be in the office. My in-office time jumps to 3-5 days per week when I have a new employee, until they're somewhat self-sufficient. My in-office time jumps when I have to prepare hardware for a project *in house*. My in-office time plummets when I'm doing a software project hosted on one of my customers' systems.

I guess what I’m saying is just be pragmatic and get your stuff done.

P.S. One of my senior guys is a road warrior, highly effective and I see him in the office maybe three times a year.


Yes, in the automation world, living on Level 2/2.5, WordPad is great for things like documenting screenshots too. In a GMP facility it can take significant effort to get a screenshot tool or a rich text editor installed. Also, taking pictures with a phone is usually a huge no-no.

Occasionally, you have to document something with…screenshots. Now, some versions of Windows Server come with the Snipping Tool, which lets you save a screenshot. On others you still have to use the Print Screen key. Then what… well, you open WordPad and paste.

Luckily, on most of the systems we interact with now we get to install Office when the system is being qualified. It could be a huge PITA otherwise.

edit: Level 2/2.5 (in the Purdue-model sense) means no internet connection and therefore no browser-based editors.


Can't you paste into Paint for the screenshots?


Yes, one by one. It’s just tedious. There’s always a solution.

Often the screenshots correlate to steps. You open WordPad and start a numbered list. Write down the action you perform and paste in the screenshot. Repeat until you're all the way done. Take it off the system, print it, sign and date it, attach it to an MOC, and you're done.

I suppose you could copy and save in Paint, then do the same thing in Word off the system by dropping in all the images. Idk, it doesn't sound like that much more work, but it's annoying enough.


I work in the OT/automation space for pharma. Cell and gene therapy is crazy amazing.

One thing I didn't really see mentioned here is the length of treatment. They have to collect the patient's sample at a hospital or other facility, transport it to a production facility, grow the sample, introduce a vector into the sample, grow it some more, and then reintroduce it. The real sci-fi moment will be when they can treat in a day.

I think most are 3+ weeks from sample to treatment right now. Still amazing, though.


The 'vein-to-vein' time (apheresis to infusion) is being shortened all the time, and you can already manufacture CAR-T cells in 24-48 hrs, but it still takes about a week to clear quality assurance and release the product (microbiology, integrated copy number, integration-site testing, autonomous growth potential, etc.). The reason being mostly that you don't want to accidentally replace a patient's B cell cancer with a T cell cancer of your own making.


Hah, you don't have to tell me about validation and quality. Yes, they usually account for a significant share of the time for anything in the pharma world.


> The real sci-fi moment will be when they get to the point they can treat in a day.

How do you imagine that would work?

Some closed-pipeline machine that lives in the hospital and automates sample → modified sample culture cycle?

Or something stranger, e.g. some kind of injectable (maybe prokaryotic?) cells that actively swim around looking for T-lymphocytes to vectorize through bacterial horizontal gene transfer, such that the whole process happens in vivo?


> Some closed-pipeline machine that lives in the hospital and automates sample → modified sample culture cycle?

That's already a thing, for example Lonza's Cocoon platform or Miltenyi's Prodigy (both are often used for on-site manufacturing).


Would you mind sharing what you think are the largest bottlenecks to (1) 24-hr turnaround and (2) CAR-T therapy for < $10K?


I think that's near impossible to achieve with an autologous product. To meet those two goals I think you'd need an allogeneic product, manufactured and QC'd in advance in a large batch, which you could thaw and infuse on demand. The problem with that approach is histocompatibility (HLA) matching, but several companies are working on that.


Thanks for sharing. This is such an impactful space. Hopefully someone can one day lower the cost to $10K, or ideally $1K.


Considering leukemia treatment today is 2 years for girls and 3 years for boys, 3 weeks is an improvement of more than an order of magnitude.


You're quoting the length of Course V Maintenance in the CALGB 10403 study (or the similar pediatric COG studies on which it was based), which is misleading in that it is (a) applicable only to ALL, (b) for patients under 40 y, and (c) only the final of five courses of treatment.

That being said, you're directionally correct, and certainly shorter-duration immunotherapies represent a sea change.


The treatment itself is a one-day procedure (although patients will generally be kept under observation for CRS for a few days and then regularly monitored for response); the 3 weeks is the time needed for manufacturing/QC. In that time patients receive bridging chemotherapy until the CAR-T product is ready, and unfortunately some do not make it.


There are a number of companies working on 'in vivo' delivery for CARs, oftentimes using the same tools proven out by the Moderna vaccine.


Yes, but that way you lose control over the dose, and to an extent over the CAR-T characteristics. CAR-T therapy is usually used in patients who have already had multiple rounds of chemo, and their immune cells are generally not in great shape. Even with 'traditional' CARs you occasionally get manufacturing failures, since the cells are too exhausted to expand in vitro or have already lost their effector functions.


Occasionally I find myself somewhere I need to access email but have no network connection. No local cache means a big ol' screw-me.


I think anyone using Outlook is probably using it out of necessity for work.

Not everyone can just jump to Linux when they work at a company.


I'm using Thunderbird on Linux with an Outlook work account. Granted, I have to pay for 'Owl for Exchange' for it to work, but I absolutely hate the Outlook program, so I'm willing to fork out the $10/yr of my own money just to avoid it.


If you still have IMAP access, Thunderbird supports OAuth2 for connecting to O365 IMAP, and TbSync works for calendar access. Seems to work pretty well currently.
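
If you want to poke at the same O365 IMAP endpoint outside Thunderbird, here's a rough stdlib-Python sketch of the XOAUTH2 handshake it performs under the hood. The user and token are placeholders; it assumes IMAP and OAuth2 are enabled on your tenant, and obtaining a real access token from Microsoft's OAuth2 flow is left out:

    import imaplib

    USER = "you@yourcompany.com"            # placeholder account
    ACCESS_TOKEN = "<oauth2-access-token>"  # from an OAuth2 flow, not shown

    def xoauth2(user, token):
        # SASL XOAUTH2 initial client response
        return f"user={user}\x01auth=Bearer {token}\x01\x01"

    imap = imaplib.IMAP4_SSL("outlook.office365.com", 993)
    imap.authenticate("XOAUTH2", lambda _: xoauth2(USER, ACCESS_TOKEN).encode())
    imap.select("INBOX")
    typ, data = imap.search(None, "UNSEEN")
    print(len(data[0].split()), "unread messages")
    imap.logout()

Same server and port Thunderbird uses; the client just trades a bearer token instead of a password.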


That is my setup. Works 100%.


No more IMAP.


I need MS for other tooling.


The web version of Outlook probably works well on other OSes.


As pointed out elsewhere, my comment wasn't really about Outlook itself. There are other things that bind me to the MS ecosystem.

I’m sure that’s true for a lot of people.


Does on my Linux Mint, no problem (for now...).


Do you not have different machines for work and personal use?


I don't do any computer work outside of my professional work.

I get away with just my phone and an iPad for everything else.


Doesn't Outlook have a web version?


It does, but the applications and environments I need to use are Windows only anyway.

IT policy also doesn't let us have anything other than Windows. I could skirt it for the day-to-day stuff, but I wouldn't want to maintain my setup in multiple places anyway.


Thunderbird works pretty well with Office365 accounts.


I'll have to check it out, but sometimes I think I should just lean into all the MS integrations.

I do like OneNote and use the ‘Send to OneNote’ for meeting notes all the time.


I still use the mobile site and am convinced that its piss-poor design is a dark pattern to get people to give up and use the app.

