Wow, that FTC case gets worse and worse the farther into it you read. What a scumbag company. It's like they opened the "Dark Patterns Unleashed" book and followed every example.
That's exactly right; I've spoken with a ton of folks who have had a good experience with Lucid Link. I think we're in a slightly different part of the market (we aren't targeting video editors so much as data-intensive applications that may use thousands of IOPS), but I appreciate that the technology is likely similar.
M4 Mac Mini with 16GB RAM is doing a "good enough" job of editing 6K raw footage in Premiere for my team. I'm surprised to say I'm content with the 16GB of RAM so far.
Edit: This is in contrast to my M1 MacBook Air with 16GB of RAM, which would stutter a lot during color grading. So I'm definitely feeling the improvement.
I bought the first MacBook Air M1 with 8GB because it was the only option available in my area. Initially, I had doubts, especially after using notebooks with more than 16GB of RAM in previous years. But I was genuinely surprised by how well the M1 performed. My takeaway is that there’s a lot of room for similar improvements in Linux!
And while I'm broadly satisfied with its performance, I do think that the SSD is probably carrying some of that load. And for a machine that often gets used far longer than a PC, I can't see that being great for longevity.
> And while I'm broadly satisfied with its performance, I do think that the SSD is probably carrying some of that load. And for a machine that often gets used far longer than a PC, I can't see that being great for longevity.
This isn't the early 2010s anymore - SSDs last "long enough" for most people, to the point they're no more of a consumable than your motherboard or your RAM. (I've actually experienced more RAM failures than SSD failures, but that's anecdotal.)
And for the downvoters - do you remember the last time you sent in your Steam Deck, Nintendo Switch, iPhone, or even laptop specifically for a random SSD failure, unrelated to water damage or some other external cause? Me neither.
I'm still very happy with my 8GB Air M1 as well. It's incredible how well it still works for a 4 year old entry level laptop. I see all these new M's come out, and I'm sure they're fantastic, but I'm not at all tempted to upgrade.
Yeah, I don’t know why 8GB base models get so much hate online. 8GB is 64 billion bits of memory. If you’re writing everyday software and you need more memory than that, you’re almost certainly doing something wrong.
I also use an 8GB M1. It has Firefox with many tabs and windows open in macOS, plus a Linux VM in UTM running VS Code, Vite, and another Firefox with lots of tabs. It's performing well! (Although swap is currently at 2.3GB, and there's a further 3.5GB of compressed data in RAM.)
How much RAM should a few browser tabs and a spreadsheet use? Spreadsheets and web pages were both invented at a time when computers had orders of magnitude less RAM than they do today, and yet Excel and Netscape Navigator still worked fine. It seems to me that bigger computers have caused Chrome to use more memory.
If 16GB is considered the "bare minimum" for RAM, well, how much RAM will all those programs use next year? Or in 10 years?
That doesn't help you right now, but 22GB is ridiculous for a few browser tabs and a spreadsheet.
> If 16GB is considered the "bare minimum" for RAM, well, how much RAM will all those programs use next year? Or in 10 years?
16GB is the figure for the next 10 years. If you see yourself being content with 8GB of memory shared between your CPU and GPU in 2030, you must have a uniquely passive use case.
I remember when people said 4GB didn't need to be the minimum for all MacBooks. Eventually macOS started consuming 4GB of memory with nothing open. Give Apple a few years to be insecure about the whole AI thing and they'll prove to you why they bumped the minimum spec. Trust me.
It’s not just tabs and spreadsheets; I also have an IDE, containers, etc.
I do think the memory footprint of many applications has gotten out of hand, but I am more than willing to spend the extra money not to have to think about it.
This doesn't necessarily mean that your workload would perform unacceptably on an 8GB model. It just means that fewer optional things would be cached in RAM, more RAM pages would be compressed, and there'd be more swap usage.
I'm very grateful to this post for introducing me to sliceutils for creating a map from a slice. That's a very elegant way to build nested models given a parent and a child struct.
If you go by what Zuck says, he calls this out in previous earnings reports and interviews[1]. It mainly boils down to 2 things:
1. As with other initiatives (mainly Open Compute, but also PyTorch, React, etc.), community improvements help them improve their own infra and attract talent.
2. Helping people create better content ultimately improves the quality of content on their platforms (both FoA and RL).
> Zuck: ... And we believe that it’s generally positive to open-source a lot of our infrastructure for a few reasons. One is that we don’t have a cloud business, right? So it’s not like we’re selling access to the infrastructure, so giving it away is fine. And then, when we do give it away, we generally benefit from innovation from the ecosystem, and when other people adopt the stuff, it increases volume and drives down prices.
> Interviewer: Like PyTorch, for example?
> Zuck: When I was talking about driving down prices, I was thinking about stuff like Open Compute, where we open-sourced our server designs, and now the factories that are making those kinds of servers can generate way more of them because other companies like Amazon and others are ordering the same designs, that drives down the price for everyone, which is good.
Disclaimer: I do not work at Meta, but I work at a large tech company which competes with them. I don't work in AI, although if my VP asks don't tell them I said that or they might lay me off.
Several of their major competitors/other large tech companies are trying to monetize LLMs, and OpenAI maneuvering an early lead into a dominant position would create yet another major competitor. If releasing these models slows or hurts them, that is in and of itself a benefit.
What benefit is there to grabbing market share from your competitors... in a business you don't even want to be in?
By that logic you could justify any bizarre business decision. Should Google launch a social network, to hurt their competitor Facebook? Should Facebook, Amazon and Microsoft each launch a phone?
I enjoyed using Google+ more than any other social network. I managed to make new connections and have standard, authentic, real conversations with people I didn't know: most were ordinary people with shared interests whom I probably wouldn't have met otherwise, and some were people I can't believe I could have connected with directly in any other way, like newspaper and news-site editors, major SDK developers, and even Kevin Kelly.
Who says they don't want to be in the market? Facebook has one product. Their income is entirely determined by ads on social media. That's a perilous position subject to being disrupted. Meta desperately wants to diversify its product offerings - that's why they've been throwing so much at VR.
I imagine their goal is to show that Meta is still SotA when it comes to AI, while at the same time feeding a community of people who will work for free to undermine OpenAI's competitive advantage and make life worse for Google, since at the very least LLMs tend to be a better search engine for most topics.
There's far more risk if Meta were to try to directly compete with OpenAI and Microsoft on this. They'd have to manage the infra, work to acquire customers, etc, etc on top of building these massive models. If it's not a space they really want to be in, it's a space they can easily disrupt.
Meta's late game realization was that Google owned the web via search and Apple took over a lot of the mobile space with their walled garden. I suspect Meta's view now is that it's much easier to just prevent something like this from happening with AI early on.
Their goal is to counter the competition. You should rarely pick the exact same strategy as your competitor and count on outgunning them; rather, you should counter them. OpenAI is ironically closed, so Meta will be open. If you can't beat them, try to degrade the competitor's value case.
Could also be that these smaller models are a loss leader or advertisement for a future product or service... like a big brother to Llama3 that's commercial.
Devil's advocate: they have to build it anyway for the metaverse and in general. Management has no interest in going into the cloud business; they had Parse a long time back, but that is done. So why not release it? They get goodwill/mindshare, may set an industry standard, and get community benefit. It isn't very different from React, Torch, etc.
Commoditizing your complement. If all your competitors need a key technology to get ahead, you make a cheap/free version of it so that they can't use it as a competitive advantage.
The complement being the metaverse. You can’t handcraft the metaverse; it would be infeasible. If LLMs are a commodity that everyone has access to, then it can be done on the cheap.
Put another way - if OpenAI were the only game in town, how much would they be charging for their product? They’re competing on price because competitors exist. Now imagine the price if a hypothetical high-quality open-source model existed that customers could use for “free”.
That’s the future Meta wants. They weren’t getting rich selling shovels like cloud providers are, they want everyone digging. And everyone digs when the shovels are free.
If you want to employ the top ML researchers, you have to give them what they want, which is often the ability to share their discoveries with the world. Making Llama-N open may not be Zuckerberg’s preference; it’s possible the researchers demanded it.
Meta basically got a ton of free R&D that directly applies to their model architecture. Their next generation AIs will always benefit from the techniques/processes developed by the clever researchers and hobbyists out there.
They were going to make most of this anyway for Instagram filters, chat stickers, internal coding tools, VR world generation, content moderation, etc. Might as well do a little extra work to open-source it, since it doesn't really compete with anything Meta is selling.
I would guess mindshare in a crowded field, i.e. discussion threads just like this one that help with recruiting and tech reputation after a bummer ~8 years. (It's best not to overestimate the complexity/number of layers in a bigco's strategic thinking.)
Fun fact about the Dave app's tipping: if you brought the value to zero, you saw an animation of a kid's food being taken away from them.
https://www.ftc.gov/news-events/news/press-releases/2024/11/...