Great idea. Whether brainstorm mode is actually useful is hard to say without trying it out, but it sounds like an interesting approach. It might be worth running an SWE benchmark with it.
Personally, I wouldn't use the personas. Some people like to try out different modes and slash commands and whatnot - but I am quite happy using the defaults and would rather (let it) write more code than tinker with settings or personas.
Fair enough on personas. I prefer activating skills over personas; for example, I enable the auto-commit skill so the agent automatically commits after finishing a feature.
I understand that. For example, with AI you don't need to remember stuff. There is a command in macOS (two, actually) to flush the DNS cache. I used to have it memorized because I needed it like twice a week. These days, I can't remember it; I just tell Copilot to flush the cache for me. It knows what to do. (For the curious, the commands are in the sketch at the end of this comment.)
And it's like that for many things. Complicated Git commands that I rarely need: I used to remember them at least 50% of the time, and if not, I looked them up. Now I just describe what I need to Copilot. The same goes for APIs that I don't use daily. All that stuff I used to know is gone, because I never look it up anymore; I just tell Copilot or Claude what to do.
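For reference, the two standard macOS commands are dscacheutil -flushcache and killall -HUP mDNSResponder. A tiny, purely illustrative Python wrapper (you'd normally just run them straight in a terminal) would be:

```python
import subprocess

# The two standard macOS commands for flushing the DNS cache.
for cmd in (
    ["sudo", "dscacheutil", "-flushcache"],
    ["sudo", "killall", "-HUP", "mDNSResponder"],
):
    subprocess.run(cmd, check=True)
```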
Is that really a bad thing? It's like saying Google Maps makes you lazier, because you don't have to learn navigation. And, heck, why stop there: cars are just insanely lazy! You lose all the exercise benefits of walking.
Why is it bad to lose the ability or interest to navigate with a paper map, though?
Humanity has adopted and then discarded skills many times in its history. There were once many master archers; outside of one crazy Danish guy, nobody has mastered archery in hundreds of years. That isn't bad, nobody cares, nothing of value was lost.
You can still use pencil and paper for the difficult things. In fact, you'll have more time for doing so, because you don't have to use pencil and paper for the simple things.
Hm, perhaps there's a way to export all your chats from whatever AI provider you use, then send them back to an LLM to sum up all the commands you use into a text file you can reference? (Rough sketch at the end of this comment.)
I've started using Etherpad a lot recently, and although I have Proton Docs and similar, I just love Etherpad for creating quick pads of information.
Or, to be honest, I just search the internet, and DDG's AI feature gives me a short answer (mostly to the point). But there are definitely ways to build our own knowledge base in case an outage happens.
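Something like this is what I have in mind for the export idea above, assuming the chats are already exported to a plain-text dump (the file names, model name, and prompt are just placeholders, using the OpenAI Python client):

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical export: one big text dump of all your chats.
chats = Path("chat_export.txt").read_text()

prompt = (
    "From the chat transcript below, extract every shell/Git/CLI command "
    "I asked about, and list each one with a one-line description:\n\n"
    + chats
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Save the distilled cheat sheet somewhere you can reference offline.
Path("commands_cheatsheet.txt").write_text(resp.choices[0].message.content)
```

From there the cheat sheet is just a local text file you can grep, paste into a pad, or keep next to your dotfiles.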
lol, I also had all sorts of k8s and pandas commands memorized that I don't remember at all anymore. But let's be honest: was it valuable to constantly look up how to make a command do what you want?
I wasted so much time on dumbass pandas documentation searches when I should have been building. AI is literally the internet; all you are doing is querying internet 2.0.
I often kept vast ugly text documents filled with random commands because I always forgot them.
I recently had the same realization and moved all my functions to a simple stand-alone server. Besides the normal AWS costs, what scares me most about AWS is the possibility that someone could try to DoS me and leave me with a huge AWS bill, because there is no real way to cap AWS spending.
The main reason I keep coming back to cloud providers is databases. I don't feel comfortable setting up a high-availability database myself, and I don't want the responsibility of managing backups. But if you go to, say, Hetzner, you won't be able to use a managed cloud database in the same network.
Let me ask you a related question: if there were a study showing that handwriting is better for your brain than typing, should secretaries have quit when typewriters and computers were introduced?
The thing is, there is no going back. There will be no significant demand for output created by humans when a machine can do it just as well. You can try to find a niche where AI is worse than humans, but those will become increasingly difficult to find.
So if you want to continue doing things without AI, that's fine. But most likely it will be a hobby, not a job.
As Scott Adams would tell you, it doesn't matter whether he made it up. If you believe him, that's your reality. If you don't believe him, your reality is that he's faking it. You can choose your reality and then act upon it.
A lot of comments here mention his comics or his controversial pro-Trump opinions of the last 10 years, but I would like to point out the influence he had over the lives of so many people through his life strategies, explanations, micro-lessons, memes, and ways of looking at the world. For example:
* systems over goals: the theory that you shouldn't set yourself specific goals, but instead build a system that has you working towards them regularly
* talent stacks: the theory that, in order to succeed in life, you don't need to be the best in one skill, but good enough in a useful combination of several skills that can be used together
* the idea that managing your energy is more important than managing your time
* the Adams rule of slow-moving disasters: any kind of disaster that takes many years to manifest can be overcome by humanity. The scary ones are the disasters that don't give you enough time to react.
* rewiring your brain: that by finding the right way to look at something, you can modify your own behavior. He wrote a whole book full of recipes to change your behavior and feelings.
* despite not listening to rap, Adams recognized Kanye West as a unique genius a long time ago, back when West had one of his first successful songs, simply because someone had sent him the lyrics
* you should never trust a video as proof of anything, if you can't see what happened before or after. It's most likely taken out of context. Just like most quotes are worthless without context.
* "perception is reality": that how someone perceives a fact is more important than what actually happened
* "simultaneous realities": realities are shaped by how people perceive them. And two people can disagree on something, while both are right at the same time, because they view the same thing through two different lenses and thus live in different realities.
* TDS (Trump Derangement Syndrome): the observation that many people hate Trump so much that they lose the capability of rational thought and either just shut their brain down when talking about anything related to Trump, or want to do the opposite of what Trump wants
* "word-thinking": when someone find labels for things or people, and then forms opinions based on the label
* detecting cognitive dissonance: when someone just shuts down their brain because the experienced reality doesn't match their expectation
* "tells for lies", like analyzing people on TV and looking for clues that they lie
* coining the term "fine people hoax" for a video snippet that was constantly repeated in the media to show Trump holding one opinion, even though anyone watching the whole video could see he meant the opposite.
* "logic doesn't win arguments", the rules of persuasion, and the theory of 'master persuaders'
* he predicted Trump winning the 2016 election when Trump had just announced his campaign, long before the primaries, because he recognized a 'master persuader' in him.
And there are probably many more things I don't remember right now, but his books and blog shaped my way of thinking, and I am using his way of looking at the world every day.
I must admit I didn't really follow 'Coffee with Scott Adams' - I think he kind of jumped the shark once he had to fill at least 30 minutes every day, and I am not that interested in politics. But that doesn't diminish his accomplishments.
I've always thought the definition of TDS was completely backwards. I've too often seen legitimate criticisms of Trump deflected with claims of TDS. Certainly it's the zealous cult-like worshipping of Trump that's deranged.
It can certainly also be used the other way round by people who defend Trump no matter what. But I have seen enough people who clearly weren't even able to discuss Trump's policy because the thought that Trump could be right about anything was unacceptable to them. And often that thought caused a very emotional reaction.
That fits the pattern of projection that crowd tends to engage in. Same thing for the "Woke Mind Virus" actually being the infection that affects them.
> TDS (Trump Derangement Syndrome): the observation that many people hate Trump so much that they lose the capability of rational thought and either just shut their brain down when talking about anything related to Trump, or want to do the opposite of what Trump wants
This isn't a real thing, it's just something his zealots throw at critics to dismiss them.
It's the equivalent of responding on Reddit with "straw man". It's meant to be a conversation finisher where the writer declares victory. But they aren't saying anything at all.
I think "number of questions asked" is the wrong metric. Because it feels like all the questions have already been asked. Whenever I need to know something, I can google it and find answers on Stack Overflow. I can't remember the last time I actually had to ask something. Or the last time I found a question that didn't already have a good answer. Stack Overflow's library of question is pretty complete, and the only reason for new questions are new tools.
Certainly LLMs are a huge factor, but I feel that LLMs rarely give good (and trustworthy!) answers to the things I would check on Stack Overflow. Just like LLMs are no real replacement for API references, because they get the details wrong all the time.
I think this is true if there aren't new questions to be asked. But technologies shift and evolve all of the time.
One of my top Stack Overflow questions for years was about the viability of ECMAScript 6. It's now essentially irrelevant because ES6 has found wide adoption in browsers etc., but at the time a lot of people appreciated the question because they wanted to adopt the technology but weren't sure how mature it was.
It's also true that some technology stacks mature to a point where there isn't much more to be asked, but I think there will continue to be a place for discussion forums where you can ask and get answers about newer, bleeding-edge technologies, use cases, etc.
Most of the time, when researching anything non-trivial, I find a question on SO about that exact problem with no answers, because the mods closed it as a duplicate of something that doesn't actually answer it (but is only vaguely related).
I think a couple of years ago there was a real window of opportunity for crypto. People could have invented crypto versions of real-life things like insurance and mortgages.
But that never really happened, because everybody was busy getting rich with shitcoins and ape NFTs, which seemed easier than building up complex organizations on a crypto foundation. Along the way they destroyed all the trust and goodwill that Bitcoin had created. Maybe, in a few years, there will be another opportunity, but right now it will be hard to convince anyone to invest money in a crypto business.
> People could have invented crypto versions of real-life things like insurance and mortgages.
> But that never really happened, because everybody was busy getting rich with shitcoins and ape NFTs.
I think the failure of the former led to the latter. The first decade had people trying to build things which were potentially useful to normal people, but the utter failure to produce a competitive solution to anything led everyone invested in cryptocurrency to pursue increasingly risky endeavors, because they knew they were not going to survive, much less get rich, competing with the safer and more efficient financial sector.
My litmus test has been PayPal: they've screwed so many people over the years, and their fees are high enough that a viable alternative would get interest from a lot of people. I've bought a fair amount of stuff online over the time cryptocurrencies have existed, from individuals up to huge companies, and NOBODY has ever encouraged cryptocurrency or, in all but a handful of cases, even accepted it (I think I did buy something from NewEgg during the period when they accepted Bitcoin but instantly sold it). Given the cost of credit card and PayPal transaction fees, that's just complete failure, because there is absolutely a huge market receptive to shaving points off its overhead and not dealing with a company that not uncommonly creates major problems for sellers.
> People could have invented crypto versions of real-life things like insurance and mortgages.
Crypto makes large technical sacrifices for the sake of being harder to regulate. I don't think that's a desirable quality for insurance and mortgages.
> People could have invented crypto versions of real-life things like insurance and mortgages.
Was there ever any demand for such things? Those products presume people would use cryptocurrency as money for real economic activity, which turned out to be very, very rare (and the wild swings in value caused by speculation would make that very foolish).
The core problem with cryptocurrency seems to be that it's an elegant technology that is not appreciably better (and in many ways much worse) than the technology it sought to replace (fiat currency). So there's sort of no point to it, but the technical elegance made too many software engineers go bonkers and fail to see it for what it really was.
Also, it turns out the obsessions of very online libertarians aren't widely shared in the general population, so things designed to cater to those obsessions don't actually work as designed. They don't understand the "customers" for their vision.
Oh, but it is “better” in one very crucial way: it enables global transactions that are difficult to trace yet accessible. Without crypto, we would not have the scourge of ransomware.
Which makes sense, because very online libertarians basically imagine themselves to be criminals (either they imagine a totalitarian government out to get them, or they don't think the law should apply to them).
I think an AI should be treated like a human. A human can consume copyright material (possibly after paying for it), but not reproduce it. I don't see any reason why the same can't be true for an AI.
The issue isn't so much about the consumption of copyrighted material as about the acquisition of that material.
Like a real person, AI companies need to adhere to IP law and license or purchase the materials that they wish to consume. If AI companies licensed all the materials they acquired for training purposes, this would be a non-issue.
OpenAI are looking for a free pass to break copyright law, and through that, also avoid any issues that would arise through reproduction.
A real person wouldn't have to pay to read a random blog, Reddit comments, Stack Overflow answers, or code on GitHub (though many open source licenses do not imply a license for training).
They might have to pay for books, or use a library.
Should these cases be treated differently? If so, it might lead to more closed internet with even more paywalls.
I think those are less of an issue. They want to train on paywalled news articles, magazines, and books, in addition to other media that the average person would have to pay for or that otherwise comes with restrictions.
In my opinion, if any copyright-related rule is applied to books or other paywalled content, it should apply equally to any Joe Shmoe's blog or code on GitHub.
I'm more worried about sexual harassment, the decaying value of truth and creativity, and increasing the power of the surveillance state than I am about job loss.