I don't think that's the case. This is the official announcement of a product, with details on how the product works, and I doubt they'll be writing anything particularly more meaty when they proceed through various rollout stages.
The way I've understood the "announcement of an announcement" rule in the past is that it's for reports/rumors that a company is going to release something, or for a company sending a teaser for a future event where they'll announce something.
The standard has never been that the product needs to already be generally available to be on-topic. If that were the standard, when could we discuss, say, a new iPhone? Not when it's talked about on the big Apple stage. Or, as another example, consider yesterday's discussion of the changes to the Google account deletion policy. No accounts will be deleted for 9 months, so was that an announcement of an announcement?
Google should learn from Apple, and from their own prior Gmail launch, that an announcement should have a clear call to action. Otherwise they've created buzz that isn't capitalized on.
and then it can quite often achieve what you want with just a comment or some previous code snippets.
Then if you load in pandas and a DataFrame you want to analyse, it quite often suggests the next stage of your data-analysis work, either right away or as you write the first step in some chain of pandas methods.
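For example, something along these lines (a made-up sketch; the file and column names are purely illustrative) is the kind of chain it will often complete from a single comment:

    # hypothetical: load a CSV and compute monthly revenue
    import pandas as pd

    df = pd.read_csv("sales.csv")  # assumed file with "date" and "revenue" columns
    monthly = (
        df.assign(month=pd.to_datetime(df["date"]).dt.to_period("M"))
          .groupby("month")["revenue"]
          .sum()
          .reset_index()
    )
    print(monthly.head())

Whether the suggestion is exactly what you want varies, but it's often a plausible next step.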
I saw this announcement a few days ago and I am enthusiastic. Even though I only use Colab 3 or 4 hours a month, it is a good product and so I happily pay $10/month for it. I hope Codey is available on the $10 plan.
I have been using CoPilot for about a year (mostly in Emacs, sometimes in VSCode) and I find it very useful, so I am especially excited to see what Google offers.
I also find myself using Bing+ChatGPT a lot for writing short functions in a variety of programming languages. All of these tools, at least for me, eliminate most of the tedium in my programming projects.
we'll start on Pro+, but plan to support Pro as well, including w/ autocompletions (those may slip if cost/usage comes in too high, but so far in dogfooding it's very reasonable).
Would have been nice to try, but I got banned yesterday for trying to connect two Colab runtimes together via SOCKS5. Turns out connecting to any proxy at all immediately suspends your Colab usage for "suspected abusive activity"!
Already did, but I accidentally did this once before, appealed, and got unbanned some hours later. It's been almost 48 hours now, so I don't think they're too forgiving of repeat offenses.
I have no dog in this fight, but your actions give the appearance of probing for weaknesses if you already knew from your prior accident that this was not allowed.
Not really, I just wanted to bypass ratelimits from certain download sites so I figured that maybe rotating notebook sessions would work. Silly me for not checking TOS every other week :P
(Shamelessly spreading the word about my open source tool)
You can do GPT-4 powered coding chats in your terminal today with aider. It's not free in the sense that you need a gpt-4 api key, which openai charges you for. But it's a pretty great way to collaborate with AI on code.
Ya, that one is some real work I was doing on the tool itself by using the tool. I use aider extensively while I code now. Aider has authored 150+ commits in that repo in the last week or so.
Many of the other transcripts are more "toy problems" to help convey the experience of working with aider to folks unfamiliar with the tool.
Right now aider doesn't even try to deal with files larger than the context window. For gpt-4, that's 8k tokens or about 32 kbytes, which is pretty reasonable.
According to the data in [1], the average source file on GitHub is less than 14KB. It's worth noting that they explicitly discarded files with fewer than 10 lines, so the true average is probably much lower.
Regardless, I haven't found the gpt-4 context window issue to be problematic in practice yet. You do need to be careful about how many files you "add to the chat" at once. But that's not hard. For sure there are cases where you would need to refactor a large file before you could use it with aider.
I am certainly interested in adding context window management features to aider. I have previously explored a bunch of approaches for this with gpt-3.5-turbo, and I shared some notes about those experiments on HN [2].
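For a rough sense of the limit, here's a minimal sketch (not aider's actual logic) of checking whether one file fits in gpt-4's 8k window, using the tiktoken library:

    # minimal sketch: does this file fit in an 8k-token context?
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")

    def fits_in_context(path, limit=8192):
        with open(path) as f:
            return len(enc.encode(f.read())) <= limit

    print(fits_in_context("some_module.py"))  # hypothetical file name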
Thanks for the thoughts on this. Ya, I am familiar with the technique of summarizing the chat history. It's on my todo list for aider.
In the [2] link I included above, I discuss the approach you suggest of summarizing the meat of most code. It didn't work well with gpt-3.5-turbo, but I do plan to revisit it with gpt-4 at some point.
Again, I haven't actually exhausted the context window often in my own use of aider. So it hasn't been a problem in practice. But no doubt it will be an issue for certain code bases and source files. I am quite interested in finding an effective solution, and plan to put in more effort here in the future.
I really think the current paradigm of literally typing if/else logic into programs all day, and getting paid huge money to do so, will go away. Programming is going to be much higher level and more accessible, though still complex. It will take five years or so.
Edit:
My prediction is essentially that:
You give it a set of requirements for an api, with edge conditions written in plain English. Test cases are provided. And the vanilla api is generated. We aren’t that far from this. It’s going to happen. Programmers may go in and tweak the generated code. When the requirements change, you pass in the old code as context.
Complex software will require human coding. But most programmers are in denial about how complex the code they’re writing actually is.
Requirements gathering is notoriously tricky, but you won't need engineers for it. Welcome to the age of the PM.
>You give it a set of requirements for an api, with edge conditions written in plain English. Test cases are provided. And the vanilla api is generated.
Sometimes I wonder where some of the posters here work or if I'm working in a dystopia.
>You give it a set of requirements for an api with edge conditions written in plain English
This part is the job! If my job was 100% writing logic, it would be infinitely easier. Defining the requirements, evaluating the tradeoffs, and discovering the edge conditions is where the bulk of my time goes. The only time someone did this for me was when I was a junior developer. Maybe I'm overestimating things, but I find it hard to believe that most engineers pulling huge salaries are just shuffling fields around on a JSON API. Do you really need AI to expose a CRUD interface to Postgres?
Edit:
This idea that LLMs will replace engineers (or lawyers, or any traditionally "skilled" field) is hype. It's the same mistake that the customer makes when he's shocked that he gets a bill for $10,000 for replacing a screw in his car engine. You are conflating the actual physical labour requirements of the job (sitting down and coding) with the actual knowledge value that is being used when you do the job.
For example, take a look at Redis. Redis is a great codebase, especially for those who want to learn C. It's simple; there are exceedingly few mind-bending, hardcore engineering algorithms in Redis. Antirez is an amazing software engineer, but Redis is not the fastest database, nor is it the most durable. What you see instead is the meticulous application of engineering tradeoffs: there are things Redis does incredibly well and things it doesn't, and that balance is made possible by the overall architecture of the code. How would you even begin to prompt this to an LLM? Again, the code isn't complex, but the engineering is, and the act of turning those ideas into something you can communicate, whether to an LLM or to a C compiler, is engineering.
No one comes home from a long day and says "Honey, I'm tired I spent all day typing JSON schemas and function signatures".
Going to guess you're both young and working at a tech company as opposed to a big company that also does some tech things.
Prior to the rise of Agile and the decrease in waterfall-style development the role of Business Analyst was very popular. It still is at many companies that do need to do waterfall development, but less so these days. The Business Analyst (BA) role is semi-technical and requires some subject matter expertise, but generally doesn't do much actual writing of code. Instead it's focused on requirements gathering, wireframe creation, test case creation & running, bug logging, and status reporting. I know you're probably saying "that's at least half of my job!" and you're right, but now go and look at the difference in pay between a BA and a developer. It turns out that if you remove the "actually writes code" part of the job, it's not worth nearly as much and I think that's what the comment above is trying to express.
In my experience, most of these aspects – requirements gathering, wireframing, test case specification, even bug logging – have a business side and a technical side.
I think Product Managers often take on a lot of these BA responsibilities since they've got the domain knowledge, but there's another step of translation from "build an X that does Y" to "build an X, in A, that does Y, but not Z, in B time, integrating with systems I, J, K". I think that takes an engineering perspective.
I've not worked with anyone titled Business Analyst that does the role you've described, but working with PMs, BI people, and others, there's a large gulf between requirements and tests that they would specify at their level, and requirements and tests that a programmer could work from.
In my experience, it's a complete myth that requirement gathering can be done in a box.
"Requirement gathering" is almost always influenced by technical capabilities. All but the simplest projects, require back and forth with guestimates about the direction and approach.
This is because it's difficult to decouple the code from the requirements. And the full requirements are never contained within a doc. So you need an engineer to look at the code to see how it's done. This is going to change.
So someone who doesn't understand code is going to know whether some AI-generated description/summary of existing code is accurate, without being able to verify that in any way? And then create requirements based on that summary? I use GPT-4 as a coding assistant pretty much every day, but it (a) makes mistakes or strange assumptions all the time, (b) can't parse the logic of or describe complex codebases, and (c) fails to account for obvious edge cases, like an array being empty instead of populated. I mean, maybe some companies will do this, but I hope I never have to use their products, and that they never contain any sensitive information.
Again, where do you guys work? Maybe let's not use software engineering as the example. If I asked you today to do requirements gathering for building a bridge across the Mississippi river, could you do it? Don't you think you would have to learn about materials, regulations, land procurement, tensile and compression strengths, and all the tradeoffs of the above? And once you learned all of that and could finally present the requirements, would it be fair to call you non-technical?
I have to strongly disagree with this. If someone non-technical does the requirements gathering, the requirements will be incomplete. Developers will then either assume their own requirements or, if they’re thorough and good at their job, check these assumptions with the business. Either way, this wastes the company’s time. Either the assumed requirements will require re-work or the company has paid two employees to do the same job that should’ve been done by one.
Here's a different 10-year prediction. AI will become good enough to be useful for programmers but not good enough to run on its own and will remain an assistant. Given that there is a shortage of programmers, more software will be written than ever. More software will beget more software engineers. Because the output of software engineers will rise, each individual software engineer will become more valuable to their business and salaries will rise.
Having played a bit with AI-assisted coding, I like this view. It's powerful, but context will always be an issue; turning real-world problems into working code is a skill in its own right, and that won't change. Superpowered, yes: for me it's all the 'dull' parts of typing stuff out that I look forward to skipping. An early study of workplace impact, a 2-year study of a call centre, found a large improvement for less skilled staff, as knowledge of best practice from those really excelling at customer service was rapidly propagated to juniors; seniors saw a much smaller increase in performance metrics. Now there is no excuse for me not to get AI to write my comments, a bunch of tests, some APIs and database interconnects, etc. I've always taken a modular, iterative approach: create a working basic model with a good foundation, then extend it, keeping it working as I build up to the final deliverable. Tempting just to go and sit in a cave for a couple of months and come back when the tools are a little more refined :)
Jesus, this just triggered another nightmare scenario I should have thought of earlier.
People are going to have it write and/or comment a function whose purpose is non-trivial and not straight-forward.
Whether the comment actually says what it should and whether the functionality matches will be checked imperfectly, or not at all.
There will be no indication of what was written by AI, so the only option is to assume that both were written competently and with the same purpose. When they don't actually match, there's no way to know which one is wrong except to check all the other code that interacts with it and figure out which parts have been working correctly, which have been expecting the originally intended behavior in ways that now subtly break, and which have only been working because they depend on the incorrect implementation that actually existed.
This is absolutely something that already happens with fully human developers, but it seems likely to be much more frequent and not caught as soon with AI assistance.
This also seems like a failure mode that could go pathologically wrong on a regular basis for TDD types.
> function whose purpose is non-trivial and not straight-forward
Nah. It will just be tons and tons of trivial, straightforward code, calling other trivial code, that calls more trivial code.
Yes, some of it will be created manually, but if you go with "the AI created it" you will get it right 99% of the time. And obviously, the more code, the more it will fail; but people always try to fix this with more code.
And then you'll be paying for the AI assistance to help navigate all the code that it wrote. That's in the Enterprise tier. Set up a sales call to learn more.
An AI may be more likely to hallucinate the wrong comment, but also much less likely to forget to update the comment when the code changes. The net result could be better comments.
10-15 years out AI will be good enough to wipe out at least half of all software developers on the planet (that is, people writing actual code). That's not a risky prediction, that's very easily going to be the case.
Those people have nowhere to go to match what they're earning now. Maybe support review roles (which won't pay particularly well), where they approve decisions by the AI that are held up for human approval.
The bottom half of software developers won't have AI assistants. The AI will have them as human assistants (required by corporations for control/safety/oversight purposes).
The ~50%-25% bracket will build software using AI tools and will rarely write the actual code.
In the top ~25% bracket (in terms of skill) you'll have software developers that are still paid very well and they'll directly write code, although not always. That group will be the only one remaining that is paid like today's software developers get paid.
Software developer in the future will most commonly mean someone who builds software via AI tools (with the AI writing nearly all of the actual code). Human software developers will be glorified prompt wizards (with required degrees; it won't be a great job).
For the median software developer, the peak has already been reached (in terms of pay and job security).
Emerging market software developers will be hammered before their economies can fully benefit from the relatively high pay of the industry (from off-shoring work from big tech).
The golden run is over for the bottom 3/4 of software developers. Prepare for it. Get used to it. In the developed world the ladder up and out of the middle class via software development is going to go away (and quickly).
To regularly write code in the future you'll have to be damn good. Good enough, and knowledgeable enough, to be better at what you're doing than an AI with a handler (human assistant). You'll be writing the AI systems that govern everything and it'll be increasingly regulated, with more government licensing (plausibly formal AI engineer licensing and actual accountability, because the risks will go way up).
>10-15 years out AI will be good enough to wipe out at least half of all software developers on the planet (that is, people writing actual code). That's not a risky prediction, that's very easily going to be the case.
How is this not a risky prediction?
1. Are you talking about AI (i.e. AGI) or LLMs? If you think we will have AGI in 10 - 15 years maybe you are right, but AGI is always just around the corner.
2. LLMs don't seem to be the magic multiplying force people are insinuating. GPT-3 has been around for almost 3 years, and while great (I was an early adopter) it's just another tool for me.
Over the past 10-15 years software has gotten easier to develop, and making software easier to develop has just gone toward serving greater and greater demand. The invention of C didn't reduce the number of engineers because it was simpler than ASM. The invention of Python didn't reduce the number of Java engineers. Every year CPUs get faster and faster (at an almost exponential rate), and yet software somehow manages to get slower and slower. No other tool in the short history of software development ever did anything close to "wipe out at least half of all software developers", even as newer tools became easier and easier to use.
I'm talking about AI - made up of multiple modules that focus on different aspects - that can comprehensively build software products, with nothing more than human sign-offs to get from start to finish. Nothing even remotely close to the difficulty of building AGI (which I don't think will happen in the next 30 years at least).
In this scenario the AI has human assistants that sign off on decisions before the AI can proceed further, before it can continue writing more code. The human developers that build this system will include checkpoint logic, so the AI judges when it's necessary to ask permission from a human to proceed (hey human, do you want me to go this way or that way?). The AI will occasionally present the human prompt clicker with multiple viable development paths to choose from (for example, overnight it'll build three different approaches to solving a problem after you leave work at 5pm; when you come in in the morning your task will be to pick the one you think is best, and the AI will proceed from there). I think a pattern of AI development -> checkpoint approval by human -> continued AI development will be the superior way to utilize AI to build software (rather than letting it get too far down the wrong path by trying to let it build without oversight most or all of the way).
The human decision making process at checkpoints will become by far the slowest part of building software. The AI will spend most of its time waiting for someone to sign off on a path.
That's actually a great supporting point to why AI will wipe most developers out.
What you're referring to is one of the AI systems that I mention that the top ~25% will still write code for directly. It's a governing AI system.
Most software development doesn't involve tasks that can very easily kill people with N thousand pounds of fast moving metal.
Most software development is trivial by comparison in terms of challenge/complexity. It'll be wiped out accordingly. Why would you need anything more than a human handler to sign off on AI development as it goes down a path? It'll be able to build drastically faster than a median developer can and it can do it without getting tired (its productivity won't implode after 4-6 hours). All you'll need are some human handlers to approve key decisions during the process of development (to ensure you get to the end product that is desired).
We have a lot of stages to go through before we achieve the level of development you're describing, and I don't think you can confidently argue 10-15 years vs 5-10 years vs 20-30 years.
As with most technology job predictions, it won't happen the way most predict. Yes what defines a developer/swe will change, but our history has shown us that we are likely to see an increase of jobs in technology for the long term. I remain optimistic that most people will find new roles as they emerge.
Isn't that the nature of tech? In the past most programmers needed to focus on low-level details, while today most devs knit together libraries and services, and yet there are more of them than ever and salaries are higher than ever.
I think nobody who enters tech expects that in 20 years we'll "code" as we do today, but we will still build stuff and need to solve problems... and there are enough problems to solve.
That's pretty much the case. We keep building layers and abstracting away.
Being a prompt wizard won't pay as well as directly writing code for the AI systems. They're different layers. There won't be more people directly developing software than there are today in the US market, there may be more overall jobs in and around the process however (ie the tech industry will continue to expand to more people in terms of employment; benefits will weaken, median pay will fall).
Most software developers will be prompt wizards, with required degrees that say they know how to be effective prompt wizards. Then there will be a lot of supporting roles, oversight roles.
More jobs in the industry, lower skill levels, less pay.
I think we push complexity forward. I agree in the sense that the pure dev part will require less bandwidth for most, but the freed-up bandwidth allows us to push complexity forward into different domains. My father still needed punch cards to code, and now we can set up an app worldwide with a few clicks that uses a NN to solve a task, all by ourselves.
So demand will be high for cross domain knowledge like Fullstack/ML + Domain X.
If you're a high skill programmer that has domain knowledge re AI, you'll do very well in the future, whether 5 or 20 years out.
We simultaneously won't need and won't want the majority of software developers that exist today (the sheer number of them), writing code in the future. That would be a bad outcome.
They're going to end up more valuable as prompt wizards and checkpoint decision makers, because the AI will be drastically better at writing code (in all respects) than they could ever be. And they're going to get paid less because more people will be able to do it, software development will become a lot less intimidating as a field. It'll be more mass market as a field, akin to being a nurse (4.2 million registered nurses in the US).
What you describe is true of pretty much all professions, even doctors who don't use their hands. 15 years is a lot at the current pace of things. Also, society will be completely transformed if this transpires; retraining will be completely normal. Not saying it's going to be easy, I'm just saying this is much bigger than software developers.
I'm not sure if I've ever seen a situation where all the code for a company gets written and then the engineers just pack up and leave. Typically the more code you write the more code you plan to write.
> 10-15 years out AI will be good enough to wipe out at least half of all software developers on the planet (that is, people writing actual code). That's not a risky prediction, that's very easily going to be the case.
The only prediction I feel is easy to make is that nobody can accurately predict the effects of an emerging technology 10-15 years down the line. That is a long time, and prognosticators almost always get it wrong. Coding will absolutely change, but I'm very skeptical that anyone can predict how with any sort of specificity.
That's what they thought 50 years ago. Programming is no longer "mov eax,1" and "int 0x80". Programming nowadays is much more accessible than it was 50 years ago... but still only a few people (programmers) do it. AI is not going to change that (i.e., neither my manager nor my mother is suddenly going to "program" anything using high-level constructs).
I find that this comment adds absolutely nothing to the conversation. It could be posted under every comment that says <regex>"AI is (not)? going to revolutionize (the world|everything|this industry|my job)"</regex>
Anybody making a prediction about the future might be right or might be wrong, and we won't know until it happens. This is just a snarky way of saying "nah dude you're wrong"
It could be posted under anything. I could claim that the earth is going to turn into a ball of cheese and then write off everyone with “only time will tell.”
Well... how about reading my text and trying to interpret it in light of the world around us? In the sense that things are going so fast that we will soon have the answer, and most probably _some_ jobs will be replaced, yes (and it won't take another 50 years, as implied in the parent comment, to have the definitive answer).
You make a good argument (in this comment). I think things are moving really fast and some jobs will be replaced. But even though AI moves extremely fast, the industry as a whole moves much slower. It will take years to integrate and leverage these things. How many companies are still running legacy Java applications? How many still run COBOL? Things will change eventually, but AI won't destroy a majority of jobs for many years.
I think this is a sad vision of the future not because a lot of programmers will be forced to do something else, but because it seems like this represents a fundamental failure for formal methods. We're hurtling towards a world where we automatically generate low-quality, buggy code, produced by stochastic parrots who mimic code written in possibly a different era by humans tackling different problems.
Forget about being employed producing software in a world where these are the norms, I don't think I'd want to be a software _user_.
I'd love for programming to be higher level and accessible, but I wish the process were:
- write a _specification_ that describes functionality, invariants, etc
- generate signatures compatible with that specification
- interactively seek implementation-relevant information from the designer, e.g.
- "I see Users can have an unbounded collection of Bars, which can each reference an unbounded number of Wugs. How many Bars do you suppose a typical User would have? How many Wugs per Bar and how many Bars will reference each Wug? How large is a Wug typically?"
- "Which of these DB queries do you expect to be run most/least frequently?"
- "This API method allows for pagination. How often do you expect a user to page beyond k?"
- generate tests
- generate code aligning with signatures, _and which causes tests to pass_ (i.e. program synthesis from examples)
- static analysis / abstract interpretation / model checking to confirm that some invariants are respected
- explicitly reporting which invariants or properties were not able to be confirmed automatically, which a human may need to confirm via an ad-hoc analysis
Software is one of the domains where we can actually do a lot of sophisticated automated reasoning, but the current trend of ML code generation as text completion ignores basically all of that, and this seems like a giant waste.
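As a tiny sketch of what the "spec -> tests" step could look like, invariants can already be written as executable properties, e.g. with the hypothesis library (the sort_bars function and its invariants below are made up purely for illustration):

    # invariants expressed as property-based tests
    from hypothesis import given, strategies as st

    def sort_bars(bars):
        return sorted(bars)

    @given(st.lists(st.integers()))
    def test_sort_bars_invariants(bars):
        out = sort_bars(bars)
        assert len(out) == len(bars)   # invariant: no elements lost
        assert sort_bars(out) == out   # invariant: idempotent

    test_sort_bars_invariants()  # hypothesis runs many generated examples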
I disagree with this as well. The number of people legitimately working on LLMs right now is probably only in the hundreds. One or a few huge models will do all the programming automation. But yes, high-paying developer jobs will exist. The rest will not pay as well, and will come with much worse job security. This field is going to change very fast.
Example: To be SOC 2 compliant, you need to have change management controls in place. How do humans manage change management if an AI is doing everything for the humans? Would humans still be doing code reviews? (AI might be so great where code reviews are obsolete, but compliance / regulations may still require humans in the loop)
There's also a -huge- subset of software that is extremely mission critical or heavily regulated by compliance where all current compliance frameworks assume there's a human in the loop. Ripping the human out of the loop would require re-writing the regulations / standards (which I guess AI could help with) but I think the change will come slower than you're predicting.
Change won't be slower because of any technological barrier necessarily (this is debatable), but because it will absolutely take a very long time for humans to fully trust AI and its output.
The era we're in currently feels like where Tesla was a few years ago. Really cool concepts and proof that self-driving "works", but there are so many edge cases limiting its complete roll-out. The original promise of "everything will be self-driving in 5 years" seemed achievable, yet 5 years later it hasn't been achieved, and it remains simply an assistive technology with many limitations.
"full stack" programming is actually incredibly hard since you can no longer push new items once the stack is full. If this happens you better go the route of "stackoverflow" programming.
A possibility, and I think the way we work will evolve as well. But coming from a more "numerical" background, I can imagine a different route. Twenty years or longer ago, people who wanted to process a larger amount of data needed to
understand low-level details, compilers, C/C++/Fortran, mathematical details, and so on. Today, we have JAX, scikit-learn, and many more tools. But these tools did not make the old 'numerical' people jobless; instead, their jobs evolved. Today, we have more data scientists than ever. You can create your own app faster than years ago, including hosting, persistent storage, load balancing, ... And again, we have more web developers than ever. The same goes for jobs like DevOps and other jobs that I probably don't even know about. The level of abstraction got higher: you'd better know what the algorithm is doing, but you do not need to implement it again. The point is the field will evolve, and right now it may be the biggest jump ever, but that does not mean jobs will go away. We might end up in a situation like self-driving, where we are really close but still missing the last bit and need human intervention. I hope LLMs will solve the tasks that have been solved many times before, like bootstrapping a CRUD app, so we can focus on the edge cases and niche problems.
I've been using ChatGPT to generate scripts I sometimes need(ed) to write for testing and troubleshooting purposes.
For example parsing files, pulling data from a table and dumping it into a CSV, monitoring performance counters or network activity, backing up and copying files, etc.
I don't consider this part of my job, but it's stuff I need to do often and I'm glad it can now be delegated to a bot.
It also means new hires and co-ops have more interesting things to do.
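For instance, this is roughly the kind of throwaway script I mean (the database, table, and column names are made up for illustration):

    # dump a table to CSV
    import csv
    import sqlite3

    conn = sqlite3.connect("app.db")  # assumed local database
    rows = conn.execute("SELECT id, name, created_at FROM users")

    with open("users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "created_at"])
        writer.writerows(rows)

    conn.close()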
Today for the first time I used code from ChatGPT in a working solution.
Nothing too advanced, a small routine to interpolate a vector.
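Roughly this sort of thing, not the exact code I shipped, but a sketch using numpy: resample a vector to a new length with linear interpolation.

    import numpy as np

    def interpolate_vector(v, new_len):
        # linearly interpolate v onto new_len evenly spaced points
        v = np.asarray(v, dtype=float)
        old_x = np.linspace(0.0, 1.0, num=len(v))
        new_x = np.linspace(0.0, 1.0, num=new_len)
        return np.interp(new_x, old_x, v)

    print(interpolate_vector([1.0, 2.0, 4.0], 5))  # [1.  1.5 2.  3.  4. ]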
It took me literally a minute to write the prompt and maybe another 5-10 to adapt it and test it.
I see myself using ChatGPT more and more for this type of thing: automation scripts and very specific, simple routines in production code.
However this is not even 5% of the stuff I do; for the other 95% there's simply no AI advanced enough to do it, and there won't be any for several years or decades or maybe ever.
Now those working purely on CRUD applications from functional specs given by others had better start switching to something that demands design skills and process know-how.
Who will write in plain English? Granted I'm not in software development, but work in an adjacent area. Communicating requirements is already a hard, unsolved problem.
And per an adjacent discussion about ChatGPT, we can look forward to a time when nobody learns to write in plain English any more.
> Complex software will require human coding. But most programmers are in denial about how complex the code they’re writing actually is.
I think it's just a matter of time until all software becomes complex; e.g. a codebase worked on by a few dozen people for 10 years will have a non-trivial level of complexity. Not disagreeing or anything, the machines might be able to do this well, we'll see. They will have to improve by a lot, though, to the point of almost no hallucinations.
But there could be many shades in between. Instead of completely replacing developers in 5 years which I personally find unlikely, it can replace 50% of them.
Thinking logically isn't going out of fashion any time soon. Excel lets you fit increasingly higher-order polynomials to a time series graph, but you still need someone to point out that's not a good forecast.
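Quick illustration of the point (the numbers are made up): a high-order polynomial can fit the history exactly and still extrapolate nonsense.

    import numpy as np

    t = np.arange(10, dtype=float)
    noise = np.array([0.3, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, 0.3, -0.2, 0.1])
    y = t + noise  # essentially a linear trend with a little noise

    coeffs = np.polyfit(t, y, deg=9)   # degree 9 through 10 points: exact fit
    print(np.polyval(coeffs, 12.0))    # "forecast" 3 steps ahead: nowhere near ~12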
Since people are bad at describing what they want there will be some kind of person that deals with the business and makes the computer work.
Already we have prompt engineering. And we could have said the same thing about people who write in assembly language, about each higher-level language, and about programs designed to be low-code. I see this as a new type of spreadsheet, but scaling these up or using these programs isn't going to make coders / software engineers / data scientists go away; in fact, historically we have seen the opposite.
What do you mean by "higher level"? AI code generation tools are extremely useful to people who know how to code, but how are they useful if you can't understand the code they spit out? They are far from bug-free, and prompting them to handle complex business logic is tricky. If the higher-level paradigm is simply English, then I don't see it. If something else, I'm curious what you foresee.
You give it a set of requirements for an api, with edge conditions written in plain English. Test cases are provided. And the vanilla api is generated. We aren’t that far from this. It’s going to happen.
Complex software will require human coding. But most programmers are in denial about how complex the code they’re writing actually is
we debated this; this is going to be a sequential rollout based on availability/capacity, and it seemed better to broadcast it was coming earlier b/c we're not going to have a "launch event" moment.
Arguably it would be better not to announce until it's actually available, because even once it's released, expectations are going to be extremely high due to Copilot.
We had a couple SWEs move us from 3.7 to 3.10 over the past six months, removing years of tech debt. This comment is dead wrong. We'll upgrade to 3.11 when they push their final regular bug fix release in April 2024.
Make fun of my stupid blog post all you want, but my eng partners are awesome!
It specifically says in multiple areas of the article that it's not available today.
"Google Colab will soon introduce..."
"Access to these features will roll out gradually in the coming months, starting with our paid subscribers in the U.S. and then expanding into the free-of-charge tier. We'll also expand into other geographies over time. Please follow @googlecolab on Twitter for announcements about new releases and launches."
This one, Codey: from the headline I thought that was what was being pre-announced. It's hard to do AI-assisted coding if you're not in everyone's editor and your competitor already is.
"We won't charge you for providing us with training data"
Also, I'm sure they have ways around it, but I'd imagine colabs are a poor source of "good" code to use for further model training, both because of the kind of code you write in notebooks and the demographic that would make up colab users. It sort of fits with the idea that autocomplete might be good at writing short functions that do some specific thing, but not much help actually writing a full program.
The problem for users (not Google) is that it's an arbitrarily enforced restriction. Of course companies can do whatever they want. Google could ban all programs that import PyTorch on their free tier, too. Totally their choice. It's also my choice to mention that people should look into alternatives like Replit if they're worried about a cool notebook they made on Colab randomly getting them in Google TOS trouble.
Does Replit give free access to GPUs for any use case? The documentation[0] has screenshots that imply that GPUs cost "cycles", which are a virtual currency you buy with real money[1].
"Soon" meaning hopefully with the release of PaLM 3? From the few tests I ran with PaLM 2 in Bard, it's... well, not absolutely horrible, but still quite a bit worse than ChatGPT 3.5.
Alexa build me a Google (or is it the other way around?)
As more people use automated program generation, most of the code will be low-level spaghetti in procedural languages, because it is easier for the model to reason with it (without needing huge buffers to know about abstractions). We will see a lot of generated JavaScript, PHP, even a return of Win32. Who cares if they are tedious for humans to read and fix; now the AI can do all kinds of advanced search/replace in the code. And maybe at some point they'll be generating machine code directly.
+1, all Colab files use the standard Jupyter .ipynb extension: we very much want to avoid any notion of lock-in. We bundle them all into a folder for you in Drive; it takes you like a second to take all of your work to any competitor.
"OK, team. With our founding cash cow threatened, we're at a critical juncture that could preserve or break us. If we're to survive, we must make the most of every factor of success at our disposal. This includes branding, and that's why we hire you, some of the best in the world at it. First major disruptive tech up for branding is a code LLM. Brainstorm now."
> Democratizing machine learning for everyone: Anyone with an internet connection can access Colab, and use it free of charge. Millions of students use Colab every month to learn Python programming and machine learning. Under-resourced groups everywhere access Colab free of charge, accessing high-powered GPUs for machine learning applications.
Ugh, cringe. Just say this is a panic swing at an existential threat (OpenAI) and you're trying to commoditize them.
This looks like an announcement of an announcement, which isn't on topic for HN. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
On HN, there's no harm in waiting for the actual thing: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...