Up until last week I was running a 960 on Mint and had absolutely no problems, nor did I even have to think about drivers. I also have a server running Tesla M10s and they're great too; a little more fiddly getting the right driver, but that's more down to the cards being weird.
Then last week I put in an Arc B580 and had some issues at the start, but that's more to do with the fact that my workstation has a Haswell Xeon v3... Otherwise it was just a matter of turning CSM off.
Without the GNU projects, software would have remained in the domain of universities and industry. Distributing it for free and wrapping it in an actual legal license was radical in and of itself, but the notion of being required to distribute source was even more radical. Without that, people don't learn to code outside of industry, people don't share ideas, and software remains in corporate silos with no/low interoperability unless a business decides to form a strategic partnership.
> outside of industry, people don't share ideas and software remains in corporate silos with no/low interoperability unless a business decides to form a strategic partnership.
Computer science and computing were taught and done at universities long before Stallman and GNU came along. I was using C++ Release E at college before GNU started, provided by Bell Labs at no cost.
Most of that stuff was made available to universities and colleges as institutions, but not to individual students. Once you graduate, you have no effective (or legal) access to it anymore ...
Sure, it was free (as in beer), but was it free (as in speech)? Could you modify and improve the compiler? If you did, could you redistribute it? Knowing Bell Labs, the answer is a definite no to the last one.
Even after Sun got a C++ compiler for free for internal use (but not by their customers) by jumping into bed with AT&T, they still hired Michael Tiemann of Cygnus Support to port G++ to Solaris.
Conversely, you get the same issue if you have no guardrails. Ie: Grok generating CP makes it completely unusable in a professional setting. I don't think this is a solvable problem.
Why does having the ability to do something mean it is ‘unusable’ in a professional setting?
Is it generating CP when given benign prompts? Or is it misinterpreting normal prompts and generating CP?
There are a LOT of tools that we use at work that could be used to do horrible things. A knife in a kitchen could be used to kill someone. The camera on our laptop could be used to take pictures of CP. You can write death threats with your Gmail account.
We don’t say knives are unusable in a professional setting because they have the capability to be used in crime. Why does AI having the ability to do something bad mean we can’t use it at all in a professional setting?
This still doesn’t make any sense. If they are worried a dumbass employee would generate CP using the tool, wouldn’t that same employee also download it from the web? Or use a web based tool to generate it?
Any employee that is going to use a corporate AI tool to generate CP is going to use other corporate tools to do worse things. There is no point in worrying about it.
Your boss tells you to choose a vendor for your AI integration. Your options:
Company A - First to the market, Reasonable Cost, Most well known name. Very easy integration.
Company B - Well regarded tools, Higher cost, Better performance and reviews from team. More difficult to integrate
Company C - Reasonably priced, performance is reasonable, has a connection to an extremely controversial individual, currently being lambasted for being a CP/revenge porn generator
Ok, now pretend you're talking to a guy who signs your paycheques. Which one are you NOT gonna pick?
I'm struggling to follow the logic on this. Glocks are used in murders, Proton has been used to transmit serious threats, C has been used to program malware. All can be legitimate tools in professional settings where the users don't use it for illegal stuff. My Leatherman doesn't need to have a tipless blade so I don't stab people because I'm trusted to not stab people.
The only reason I don't use Grok professionally is that I've found it to not be as useful for my problems as other LLMs.
> Ie: Grok generating CP makes it completely unusable in a professional setting
Do you mean it's unusable if you're passing user-provided prompts to Grok, or do you mean you can't even use Grok to let company employees write code or author content? The former seems reasonable, the latter not so much.
What you're describing is still ultimately the "view" layer of a larger autopilot system, that's not what OP is doing. He's getting the text generator to drive the drone. An LLM can handle parsing input, but the wayfinding and driving would (in the real world) be delegated to modern autopilot.
I don't think you understand what an "LLM" is. They're text generators. We've had autopilots since the 1930s that rely on measurable things... like PID loops and direct sensor input. You don't need the "language model" part to run an autopilot; that's just silly.
You seem to be talking past them and ignoring what they are actually saying.
LLMs are a higher level construct than PID loops. With things like autopilot I can give the controller a command like 'Go from A to B', and chain constructs like this to accomplish a task.
With an LLM I can give the drone/LLM system a complex command that I'd never be able to encode for a controller alone: "Fly a grid over my neighborhood, document the location of and take pictures of every flower garden."
And if an LLM is just a 'text generator' then it's a pretty damned spectacular one, as it can take free-form input and turn it into a set of useful commands.
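That "free-form input into useful commands" step can be sketched without calling any real model: the LLM only ever proposes actions as structured output, and a thin validation layer is the one thing allowed to talk to the flight controller. A minimal sketch in Python; the command schema and the canned model output are both invented for illustration:

```python
import json

# Hypothetical command schema the low-level controller accepts.
ALLOWED_ACTIONS = {"takeoff", "goto", "photo", "land"}

def parse_llm_commands(llm_output: str) -> list[dict]:
    """Validate the LLM's output against a strict command schema.

    Anything the controller can't execute is rejected here, so the LLM
    only ever proposes actions -- it never drives the motors directly.
    """
    commands = json.loads(llm_output)
    for cmd in commands:
        if cmd.get("action") not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {cmd!r}")
    return commands

# A canned response standing in for what a real model might return for
# "fly a grid over my neighborhood and photograph every flower garden":
canned = '[{"action": "takeoff"}, {"action": "goto", "lat": 51.5, "lon": -0.1}, {"action": "photo"}]'
print(parse_llm_commands(canned))
```

The point of the split is that the language model handles the fuzzy part (intent to plan) while everything downstream stays deterministic and auditable.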
They are text generators, and yes they are pretty good, but that really is all they are; they don't actually learn, they don't actually think. Every "intelligence" feature from every major AI company relies on semantic trickery and managing context windows. It even says it right on the tin: Large LANGUAGE Model.
Let me put it this way: What OP built is an airplane in which a pilot doesn't have a control stick, but they have a keyboard, and they type commands into the airplane to run it. It's a silly unnecessary step to involve language.
Now what you're describing is a language problem, which is orchestration, and that is more suited to an LLM.
Give the LLM agent write access to a text file to take notes and it can actually learn. Not really reliable, but some seem to get useful results. They aren't just text generators anymore.
(but I agree that it does not seem the smartest way to control a plane with a keyboard)
My confusion, maybe? Is this simulator just flying point A to B? It seems like it's handling collisions while trying to locate and identify the targets. That seems quite a bit more complex than what you're describing as solved since the 1930s.
LLMs can do chat completion, but they don't do only chat completion. There are LLMs for image generation, voice generation, video generation, and possibly more. The drone's camera feeds images to the LLM, which then determines what action to take based on them. It's similar to asking ChatGPT "there is a tree in this picture; if you were operating a drone, what action would you take to avoid a collision?", except the "there is a tree" part is done by the LLM's image recognition, and the system prompt is "recognize objects and avoid collisions". I'm simplifying a lot, of course, but it is essentially generating navigational directions from visual context using image recognition.
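That perceive-then-act loop can be sketched with the model swapped out for a hard-coded stub, just to make the control flow visible; the detection labels and action names here are invented for illustration, not from OP's project:

```python
# Toy avoidance loop: a stand-in "policy" maps detections in a frame to an
# action, mirroring the "there is a tree -> turn" pattern described above.
# In the real setup this decision would come from the multimodal model.
def choose_action(detections: list[str]) -> str:
    if "obstacle_ahead" in detections:
        return "yaw_right"  # steer away before continuing
    return "forward"

# Simulated per-frame detection output from the vision side of the model.
frames = [["clear"], ["obstacle_ahead"], ["clear"]]
actions = [choose_action(f) for f in frames]
print(actions)  # ['forward', 'yaw_right', 'forward']
```

Everything interesting in the real system lives inside the stubbed function; the loop around it is the same either way.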
Yes it can be, and often is. Advanced Voice Mode in ChatGPT and the voice mode in Gemini are LLMs. So is the image gen in both ChatGPT and Gemini (Nano Banana).
"You don't need the "language model" part to run an autopilot, that's just silly."
I think most of us understood that reproducing what existing autopilots can do was not the goal. My inexpensive DJI quadcopter has impressive abilities in this area as well. But I cannot give it a mission in natural language and expect it to execute it. Not even close.
I don't understand. Surely training an LSTM on sensor input is a more practical and reasonable way than trying to get a text generator to speak commands to a drone.
The fact that a language model can "reason" (in the LLM-slang meaning of the term) about 3D space is an interesting property.
If you give a text description of a scene and ask a robot to perform a peg-in-hole task, modern models are able to solve it fairly easily based on movement primitives. I implemented this on a UR robot arm back in 2023.
The next logical step is, instead of having the model output text (code representing movement primitives), outputting tokens in action space. This is what models like pi0 are doing.
I mean, semantically, language evolved as an interpretation of the material world. So assuming you can describe a problem in language, and considering that there exists a solution to that problem that is describable in language, I'm sure a big enough LLM could do it... but you can also calculate highly detailed orbital maps with epicycles if you just keep adding more. You just don't, because it's a waste of time and there's a simpler way.
The latter part is interesting. I'm not sure how the performance of one of those would be once they are working well, but my naive gut feeling is that splitting the language part and the driving part into two delegates is cleaner, safer, faster and more predictable.
note that the control systems you were talking about before (i.e. PID) would probably take hold pretty directly in a tiny network, and exactly because of that limitation, be far less likely to contain 'hallucinations'. object avoidance and path planning are likely similar.
since this is a limited and continuous domain, it's a far better one for neural training than natural language. I guess this notion that a language model should be used for 3d motion control is a real indicator about the level of thought going into some of these applications.
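For reference, the kind of PID loop being contrasted with an LLM here fits in a few lines. A toy sketch, assuming an idealized plant where the controller output directly sets climb rate; the gains are made up for illustration, not tuned for any real airframe:

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement: float, dt: float) -> float:
        error = self.setpoint - measurement
        self.integral += error * dt
        # No derivative kick on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold altitude at 10 m in a toy plant where output sets climb rate (m/s).
pid = PID(kp=0.8, ki=0.1, kd=0.2, setpoint=10.0)
alt = 0.0
for _ in range(200):          # 200 steps of 0.1 s = 20 s of simulated flight
    alt += pid.update(alt, dt=0.1) * 0.1
print(f"altitude after 20 s: {alt:.2f} m")
```

The whole controller is a handful of multiplications on directly measured state, which is exactly why its failure modes are analyzable in a way a language model's are not.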
> This is largely about extending that thesis to the entire ecosystem. No GH issues, no PRs, no interaction. No kudos on HN, no stars on github, no "cheers mate" as you pass them at a conference after they give a great talk.
This feels like one of those tropes that keeps showing up whenever new tech comes out. At the advent of recorded music, I'm sure buskers and performers were complaining that live music was dead forever. Stage actors were probably complaining that film killed plays. Heck, I bet someone even complained that video itself killed the radio star. Yet here we are, hundreds of years later: live music is still desirable, plays still happen, and faceless voices are still around; they're just called v-tubers and podcasters.
> This feels like one of those tropes that keeps showing up whenever new tech comes out.
And this itself is another tired trope. Just because you can pattern match and observe that things repeatedly went a certain way in the past, doesn't mean that all future applications of said pattern will play out the same way. On occasion entire industries have been obliterated without a trace by technological advancement.
We can also see that there must be some upper ceiling on what humans in general are capable of - hit that and no new jobs will be created because humans simply won't be capable of the new tasks. (Unless we fuse with the machines or genetically engineer our brains or etc but I'm choosing to treat those eventualities as out of scope.)
Give me one instance where that has actually happened? I'm racking my brain but can't think of one. We are a weird species in that even if we could replace ourselves, our fascination with ourselves means that we don't ever do it.
Cars and bicycles have replaced running for travel over great and small distances, and yet we still have track events culminating in the Olympics.
Sure, things continue to persist as a hobby, a curiosity, a bespoke luxury, or the like. But that's not at all the same thing as an industry. Only the latter is relevant if we're talking about the economy and employment prospects and making a living and such.
It's a bit tricky to come up with concrete examples on the spot, in particular because drawing a line around a given industry or type of work is largely subjective. I could point to blacksmithing and someone could object that we still have metalworkers. But we don't have individual craftsmen hammering out pieces anymore. Someone might still object that an individual babysitting a CNC machine is analogous but somehow it feels materially different to me.
Leather workers are another likely example. To my mind that's materially different from a seamstress, a job that itself has had large parts of the tasks automated.
Horses might be a good example. Buggies and carriages replaced by the engine. Most of the transportation counterparts still exist but I don't think mechanics are really a valid counterpart to horse tenders and all the (historic) economic activity associated with that. Sure a few rich people keep race horses but that's the sort of luxury I was referring to above. The number of related job positions is a tiny fraction of what it was historically and exists almost solely for the purpose of entertaining rich people.
Historically the skill floor only crept up at a fairly slow rate so the vast majority of those displaced found new sectors to work in. But the rate of increase appears to have picked up to an almost unbelievable clip (we're literally in the midst of redefining the roles of software developers of all things, one of the highest skilled "bulk" jobs out there). It should be obvious that if things keep up the way they've been going then we're going to hit a ceiling for humans as a species not so long from now.
Tin Pan Alley is the historical industry from before recording: composers sold sheet music and piano rolls to publishers, who sold them to working musicians. The ASCAP/BMI mafia would shake down venues and make sure they were paying licensing fees.
Recorded music and radio obviously reduced the demand for performers, which reduced demand for sheets.
umm, I don't know if you've seen the current state of trying to make a living with music, but it's widely accepted as dire. Touring is a loss leader, putting out music for free doesn't pay, and streaming payouts are abysmally low. No one buys songs.
All that is before the fact that streaming services are stuffing playlists with AI generated music to further reduce the payouts to artists.
> Yet here we are, hundreds of years later, live music is still desirable, plays still happen, and faceless voices are still around...
Yes all those things still happen, but it's increasingly untenable to make a living through it.
Artists were saying this even before streaming, though, much less AI.
I listen pretty exclusively to metal, and a huge chunk of that is bands that are very small. I go to shows where the headliners stick around at the bar and chat with people. Not saying this to be a hipster - I listen to plenty of "mainstream" stuff too - but to show that it's hard to get smaller than this when it comes to people wanting to make a living making music.
None of them made any money off of Spotify or whatever before AI. They probably don't notice a difference, because they never paid attention to the "revenue" there either.
But they do pay attention to Bandcamp. Because Bandcamp has given them more ability to make money off the actual sale of music than they've had in their history - they don't need to rely on a record deal with a big label. They don't need to hope that the small label can somehow get their name out there.
For some genres, some bands, it's more viable than ever before to make a living. For others, yeah, it's getting harder and harder.
Is it though? Think about being a musician 200 years ago. In 1826 you needed to essentially be nobility or nobility-adjacent just to be able to touch an instrument, let alone make a living from it. 100 years later, in 1926, the barrier to entry was still sky high; nobody could make and distribute recordings without extensive investment. Nowadays it's not uncommon for a 17-year-old to download some free composer software, sign up for a few accounts, and distribute their music to an audience of millions. It's not easy to do, sure, but there is opportunity that never existed before. If you were to take at random a 20-year-old from the general population in 1826, 1923, 1943, 1953, 1973, '83, etc, would you REALLY say that any of them had a BETTER opportunity than today?
> I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points
Haha, Highcharts is a running joke around my office because of this. Every few years the business will bring in consultants to build some interface for us, and every time we have to explain to them that Highcharts, even with its turbo mode enabled, chokes on our data streams almost immediately.