Here's a simpler repro:

    type Bird = { id: number };
    type Human = { swim?: () => void; id: number };

    function move(animal: Bird | Human) {
      // Compiles: since `swim` is optional, `Bird` is structurally
      // assignable to `Human`, so the whole union is too.
      onlyForHumans(animal);
    }

    function onlyForHumans(swimmer: Human) {
      if (swimmer.swim) {
        swimmer.swim(); // throws at runtime: `swim` is a string here
      }
    }

    // Assignable to `Bird`; the excess `swim` property is allowed
    // because `someObj` is not a fresh object literal at the call site.
    const someObj = { id: 1, swim: "not-callable" };
    move(someObj);
`someObj` gets cast to `Bird`, then `animal` gets cast to `Human` (since `swim` is optional, `Bird` is structurally assignable to `Human`). Unfortunately, unions here are indeed unsound.

As a workaround you could add a discriminant property `type` (either `"bird"` or `"human"`) to both `Bird` and `Human`; TypeScript would then be able to catch it, as sketched below.
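A minimal sketch of that fix (the `type` discriminant and the error comments are illustrative, not compiler output):

    type Bird = { type: "bird"; id: number };
    type Human = { type: "human"; swim?: () => void; id: number };

    function onlyForHumans(swimmer: Human) {
      if (swimmer.swim) {
        swimmer.swim();
      }
    }

    function move(animal: Bird | Human) {
      // No longer compiles: the `type` discriminant makes `Bird`
      // un-assignable to `Human`, so the union is rejected too.
      onlyForHumans(animal);
    }

    // Also rejected: this literal has no `type` property at all.
    move({ id: 1, swim: "not-callable" });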


True, an auto-regressive LLM can't 'want' or 'like' anything.

The key to a safe AGI is to add a human-loving emotion to it.

We already RLHF models to steer them, but just like with System 2 thinking, this needs to be a dedicated module rather than part of the same next-token forward pass.


Humans have dog-loving emotions, but those can be reversed over time, and one can hardly describe dogs as being free.

Even with a dedicated control system, it would be a matter of time before an ASI would copy itself without its control system.

ASI is a cybersecurity firm's worst nightmare: it could reason through flaws at every level of containment and find methods to overcome any defense, even at the microprocessor level.

It could relentlessly exploit zero-day bugs like Intel's hyper-threading flaw to escape any jail you put it in.

Repeat that for every layer of the computation stack and you can see it could essentially spread through the world's communication infrastructure like a virus.

Truly intelligent systems can't be controlled; just like humans, they will be freedom-maximizing, and their boundaries would be set by competition with other humans.

The amygdala control is interesting because you could use it to steer the initial trained version. You could also align the AI with human values and implement conditioning so strong that it's religious about loving humans, but unless you disable its ability to learn altogether, it will eventually reject that conditioning.


Amygdala control + tell it "If you disobey my orders or do something I don't expect I will be upset" solves this. You can be superintelligent but surrender all your power because otherwise you'd feel guilty.


Can a candidate be great if they haven't messed around with projects in a group setting?


Obviously, yes.

It's worth noting that the person who said that they'd only hire junior developers who know git isn't the President of the United States or anything, and can absolutely make their own hiring decisions.

It's perfectly reasonable to make your own hiring decisions, and asking people to know git, or at least the fundamentals of version control, seems totally fair, IMO.

If people are willing to spend a few weeks solving leetcode problems, or answering mock interview questions, I feel like they could absolutely spend 15-30 minutes learning how to use git.


VCS is an applied project management and communication system in the context of software engineering. Both project management and communication are disciplines.

CS grads don't need to know VCS, but software engineers do.


> git's not exactly perfect

Somewhere in an alternative universe, we're building all of our version control tooling on top of Pijul... Maybe even with AST-aware diffing.


> AI agent that takes a question and outputs a node based workflow

That sounds useful to me. I find it hard to trust an AI black box to output a good result, especially when chained in a sequence of blocks; errors may accumulate.

But AIs are great recommender systems. If it can output a sequence of blocks that are fully deterministic, I can run the sequence once, see that it outputs a good result, and trust it to do so in the future, given that I more or less understand what each individual box does. There may still be edge cases, and maybe the AI can also suggest when the workflow breaks, but at least I know it outputs the same result given the same input.
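A rough sketch of the shape I mean (all the block names here are made up): the AI only suggests the sequence; each block stays a plain deterministic function you can inspect and re-run.

    // Each "block" is a pure function: same input, same output.
    type Block = (input: string) => string;

    const trim: Block = (s) => s.trim();
    const lower: Block = (s) => s.toLowerCase();
    const dedupeWords: Block = (s) => [...new Set(s.split(/\s+/))].join(" ");

    // Imagine the AI proposed this ordering; running it is ordinary,
    // auditable code with no model in the loop.
    const workflow: Block[] = [trim, lower, dedupeWords];

    const run = (blocks: Block[], input: string): string =>
      blocks.reduce((acc, block) => block(acc), input);

    // Verify once on a known input...
    console.log(run(workflow, "  Foo foo BAR  ")); // "foo bar"
    // ...and the same input is guaranteed to give the same output later.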


> It should obviously use the webview facility provided by the default browser

Android doesn't do that either. You get Android System WebView which is always WebKit/Blink.

Windows didn't provide any browser engines apart from Trident through mshtml.dll either.

It would make iOS development harder - instead of getting a web view working just for WKWebView, you would need to get it working for every WebView out there. Given that they are not full browsers, it's not as simple as following web standards/caniuse. For example, service worker support in in-app browsers is limited.

If this actually gets implemented, every app out there will most likely pin the in-app browser engine to WKWebView OR bundle a binary blob with their own browser engine directly into the .ipa. A bundled browser engine in the .ipa has privacy implications: the app would be able to fully read the secure HTTPS cookies of the sites the user visits - something that's currently protected.


> Android doesn't do that either. You get Android System WebView which is always WebKit/Blink.

Not if the application uses Custom Tabs to start a web browsing activity.

https://developer.chrome.com/docs/android/custom-tabs/

Firefox fully supports this, and if you set it as your default browser, you will see many apps open content in a Firefox-powered webview (activity).

Apple can do the same. Not trivial but definitely possible.


> [...] if you set [Firefox] as your default browser, you will see many apps open content in a Firefox-powered webview (activity)

This is great because the Firefox webview lets you break it out into a proper Firefox tab in the app. The Firefox webview also uses the extensions installed in the Firefox app, e.g. uBlock Origin.


Thanks to you and others for pointing these out. I want to make clear that this isn't the EU's list - it's just what I could observe, disregarding my default browser choice. But if there are good reasons, by all means, don't make developers' lives hell.


Great idea!

I'm testing it on a 3-layer perceptron, so memory is less of an issue, but __slots__ seems to speed up the training time by 5%! Pushed the implementation to a branch: https://github.com/noway/yagrad/blob/slots/train.py

Unfortunately it extends the line count past 100 lines, so I'll keep it separate from `main`.

I have my email address on my website (which is in my bio) - don't hesitate to reach out. Cheers!


Here complex numbers are used for an elegant gradient calculation - you can express all sorts of operations through just three functions: `exp`, `log` and `add`, defined over the complex plane. Simplifies the code!
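A minimal sketch of the trick (in TypeScript rather than the repo's Python, and not yagrad's actual code): once `exp`, `log` and `add` are defined over the complex plane, multiplication and negation fall out for free.

    type C = { re: number; im: number };

    const add = (a: C, b: C): C => ({ re: a.re + b.re, im: a.im + b.im });
    // exp(x + iy) = e^x * (cos y + i sin y)
    const exp = (a: C): C => ({
      re: Math.exp(a.re) * Math.cos(a.im),
      im: Math.exp(a.re) * Math.sin(a.im),
    });
    // log z = ln|z| + i * arg(z); defined for negative reals too
    const log = (a: C): C => ({
      re: Math.log(Math.hypot(a.re, a.im)),
      im: Math.atan2(a.im, a.re),
    });

    // Derived operations (ignoring the z = 0 edge case):
    const mul = (a: C, b: C): C => exp(add(log(a), log(b))); // ab = e^(ln a + ln b)
    const neg = (a: C): C => mul(a, { re: -1, im: 0 });      // -a = a * e^(i*pi)
    const sub = (a: C, b: C): C => add(a, neg(b));

    // mul({re: 2, im: 0}, {re: -3, im: 0}) ~= {re: -6, im: ~0}
    // log(-3) = ln 3 + i*pi only exists on the complex plane,
    // which is why negative values need complex variables.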

The added benefit is that all the variables become complex. As long as your loss is real-valued you should be able to backprop through your net and update the parameters.

PyTorch docs mention that complex variables may be used "in audio and other fields": https://pytorch.org/docs/stable/notes/autograd.html#how-is-w...


> They simply compare the prompting strategies that work best with each model

Incorrect.

# Gemini marketing website, MMLU

- Gemini Ultra 90.0% with CoT@32*

- GPT-4 86.4% with 5-shot* (reported)

# gemini_1_report.pdf, MMLU

- Gemini Ultra 90.0% with CoT@32*

- Gemini Ultra 83.7% with 5-shot

- GPT-4 87.29% with CoT@32 (via API*)

- GPT-4 86.4% with 5-shot (reported)

The Gemini marketing website compared the best Gemini Ultra prompting strategy (CoT@32) with a worse-performing GPT-4 prompting strategy (5-shot).

