You have it right there. There is a large set of applications that do not require accuracy. These applications will benefit hugely from LLMs. Examples include recommender systems, knowledge extraction, transformation, and so on, all in domains where accuracy does not matter too much.
In particular, instruct-aligned LLMs make it very easy to integrate ad hoc machine learning into an application.
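To make the "ad hoc machine learning" point concrete, here is a minimal sketch of dropping an instruct-tuned LLM into an app as a sentiment classifier, with no training pipeline at all. The `complete` callable and the prompt wording are my own assumptions, not a specific provider's API; in practice it would be a thin wrapper around whatever LLM endpoint you use.

```python
# Sketch: ad hoc classification via an instruct-tuned LLM.
# `complete` is any callable that sends a prompt to an instruct model
# and returns its text reply (e.g. a wrapper around a provider's API).

def classify_sentiment(text: str, complete) -> str:
    """Label `text` as 'positive', 'negative', or 'neutral'."""
    prompt = (
        "Classify the sentiment of the following text.\n"
        "Answer with exactly one word: positive, negative, or neutral.\n\n"
        f"Text: {text}"
    )
    answer = complete(prompt).strip().lower()
    # Guard against chatty replies: scan for a known label,
    # fall back to 'neutral' if the reply is unparseable.
    for label in ("positive", "negative", "neutral"):
        if label in answer:
            return label
    return "neutral"

# Usage with a stubbed model (swap in a real API call in production):
fake_llm = lambda prompt: "Positive."
print(classify_sentiment("I love this product!", fake_llm))  # → positive
```

The whole "model" is a prompt plus a few lines of parsing, which is exactly why this style of integration is so cheap compared to training a bespoke classifier.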
This is really nice to see! I asked about that feature almost 2 years ago as we wanted to use Supabase for everything. Unfortunately there were no plans back then to support it, so we had to use another provider for object storage.
There are many ways you can solve the issues without restricting free speech.
You can restrict how you can monetize a product. I think the problem would be much smaller if you had to pay a price congruent to the value you get. Only a few people would pay for Facebook.
You can make the platforms responsible for what is published on them and enforce that. They would never scale this much.
> And just like the Supreme Court wrote 30 years ago, the answer is the same today: if you don't like these products and feel they are negative, then don't use them.
We have already collectively agreed that this is not an argument. That is why there are agencies like the FDA, etc.
The first year or two were a mess with support for Apple. I remember struggling as late as 2023 with running certain modern tech stacks on the MBP M1.
It is only within the last year that I feel like the ecosystem has entirely caught up.
Obviously MS will benefit from this if they are able to run, e.g., arm64 Docker images.
I recall people complaining about things like… was it Docker? Other dev tools. But programmers are a niche, and a technically competent niche that can both get into trouble and find their own solutions. It is essentially the opposite of consumer support.
A consumer is happy as long as they can run Edge on their computer and log in to Facebook. They will not complain about Windows on ARM either.
However, we can definitely agree that certain sectors will struggle more with Windows on ARM. My intuition is that a lot of really conservative industries run on Windows. Some with a bespoke program for Windows 95 that they still keep running on XP.
Chromebooks are essentially that. You need some hacks to get non-Google desktop environments to run, but Chromebooks are how Google brought Linux to mainstream laptops.
Currently I can run PopOS using Parallels, which runs surprisingly well!
What I want is a computer with 64-128GB RAM, a really fast CPU, and a coprocessor that does parallel computations (let's call it an AI chip like everybody else) so that I can run ollama, complex Elixir Livebooks, etc.
So far the MacBooks fit the bill. But I really like PopOS and would adore running it on some beefy ARM laptop.
RaspberryPI? If you mean ATX-factor then you can go and buy something like this Ampere Altra.[1] Probably better off putting Asahi Linux on a Mac Studio though, as the CPU upgradability is going to be limited regardless.
You are both technically correct - the best form of correct.
It seems like the Raspberry Pi maxes out at 8GB of RAM, which effectively will not allow me to work. I also do not plan to run Google's OS for Chromebooks, and it seems like installing other OSs is a hassle on Chromebooks.
In essence I want the specs of an MBP M3 Max with 128GB of RAM, just running PopOS.
What reduces my trust is the lack of coordination between fiscal and monetary policy. Why didn't the government raise the salaries of teachers, nurses, etc. when central banks started doing quantitative easing?
Fluctuating interest rates wipe out home owners in the most sought-after places. With upwards of 20x leverage when buying a house and direct exposure to interest rates, some people are really exposed.
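The 20x figure is worth making concrete. With hypothetical numbers (a 5% down payment on a 500k home, which is exactly 20x leverage), a small price move or rate move swings the owner's position dramatically:

```python
# Hypothetical illustration of 20x leverage on a home purchase.
price = 500_000
down = 0.05 * price          # 25,000 of own money -> 20x leverage
loan = price - down          # 475,000 borrowed

# A mere 5% fall in the home's price wipes out the entire down payment:
new_price = price * 0.95
equity = new_price - loan
print(equity)  # → 0.0, the owner's stake is gone

# Direct rate exposure on a floating-rate loan of that size:
for rate in (0.01, 0.04):
    print(rate, loan * rate)  # yearly interest: 4,750.0 vs 19,000.0
```

So a 3 percentage point rate hike roughly quadruples the interest bill while the leveraged equity can vanish on a single-digit price dip, which is the exposure being described.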
Moving fast and forcefully on central bank interest rates seems catastrophic to me.
While I don't know how this will manifest in real products, there are some insights into why this development is important. These insights come from the huge successes of the LLMs we already have.
The insight is the ability to emit knowledge in different dialects. An example is understanding legalese. I am not able to do that, as I have no degree in law. But LLMs are hugely effective at taking me through a legal filing and helping me understand what I need to put into each field.
The hope is that developments like this will ensure the most efficient interface for a given group of people: Arabic speakers get an interface that takes right-to-left writing into account, legal people get a language they understand, and laymen get an interface that caters to them.
I see a lot of critical comments around this. In my opinion, this will turn out to be an accessibility thing (probably mandated by the EU in some years), just as it is considered good form today to support text-to-speech software for vision-impaired people.
I think it would be productive for you to focus on other sources to learn about AI. You are probably better off looking at what people achieve using LLMs and prompting instead of focusing on the fundamentals of probability theory.
> A system that completely reconfigures itself based on what it predicts that particular [user] might want, based on biased or incomplete data, sounds like a terrible idea.
This sentence shows a rather poor understanding of AI developments for this type of application. It would be the same as saying that ChatGPT is entirely useless because each character it responds with is merely chosen from a probability distribution over the previous characters. While the assessment is technically correct, it diminishes the fact that it has huge value.
I also see that you have a lot of concerns from trying to predict how this affects other parts of a product (customer support, etc.). While I understand that this type of uncertainty is uncomfortable, these things usually pan out alright (or at least profitably).
A comment like this sounds like a 70s postman saying that this electronic "internet" would never work because people would not be able to communicate with the rigour of handwritten letters.
I am not saying that this article is right or wrong. I am merely saying that a bit of curiosity would be beneficial when assessing new ideas; it can also be the difference between being in or out of the job market in 4 years.
This comment is reminiscent of the critical reception of Wikipedia.
I spent most of my academic years studying type systems. Unless you work on spacecraft or blockchain tech, that level of assurance is not needed. Modern type systems are about DX rather than ensuring correctness.
It is easy to be pessimistic about new technology. Sometimes it is merited. Oftentimes it is noise and needless ranting.
In the framework of GDPR (which I know is not applicable in the US) you have the legal right to ask them to correct information about you.
I think that is a very reasonable law: if an organisation makes decisions about you, you should be able to force them to process correct information.
(I have used that clause once, when a bank wrongly reported that I had an open account of a certain type, which resulted in me not being able to open that type of account at another bank.)
Does it also cover information about things related to you?
For example, the insurance company had the correct address for the person (no need to correct it) but the wrong information about the location of that address.
I reckon you can argue that they store and use wrong coordinates for the place you reside, which would indeed be illegal, just like it would be illegal to store a wrong address of where you reside.