ericd's comments | Hacker News

Right, do you want a King Tiger or 20 Shermans?


Yep. You see this play out not only in war, but in business and software as well.

e.g.

https://en.wikipedia.org/wiki/Worse_is_better


Absolutely, especially the part about just rolling your own alternative to Claude Code - build your own lightsaber. Having your coding agent improve itself is a pretty magical experience. And then you can trivially swap in whatever model you want (Cerebras is crazy fast, for example, which makes a big difference for these many-turn tool call conversations with big lumps of context, though gpt-oss 120b is obviously not as good as one of the frontier models). Add note-taking/memory, and ask it to save key facts there. Add voice transcription so that you can reply much faster (LLMs are amazing at taking in imperfect transcriptions and understanding what you meant). Each of these things takes on the order of a few minutes, and it's super fun.
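
For a sense of how little code this takes, here's a minimal sketch of that kind of loop against an OpenAI-compatible endpoint (the Cerebras base URL, model name, and single read_file tool are illustrative assumptions, not a description of my actual agent):

    import json
    from openai import OpenAI

    # Swap base_url/model to change providers (Cerebras, OpenRouter, a local server, ...).
    client = OpenAI(base_url="https://api.cerebras.ai/v1", api_key="...")

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Return the contents of a file",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }]

    def run_tool(name, args):
        if name == "read_file":
            return open(args["path"]).read()
        return f"unknown tool {name}"

    messages = [{"role": "user", "content": "Summarize agent.py"}]
    while True:
        msg = client.chat.completions.create(
            model="gpt-oss-120b", messages=messages, tools=TOOLS,
        ).choices[0].message
        messages.append(msg)
        if not msg.tool_calls:  # no more tool requests; the model is done
            print(msg.content)
            break
        for call in msg.tool_calls:  # run each tool, feed the result back in
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_tool(call.function.name, json.loads(call.function.arguments)),
            })

Swapping providers really is just changing base_url and the model name.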


I agree with you mostly.

On the other hand, I think that "show it or it didn't happen" is essential.

Dumping a bit of code into an LLM doesn’t make it a code agent.

And what magic? I think you never hit conceptual and structural problems. Context window? History? Good or bad? Large-scale changes or small refactorings here and there? Sample size of one, or several teams? What app? How many components? Greenfield or not? Which programming language?

I bet you will color Claude and especially GitHub Copilot a bit differently, given that you can kill any self-made code agent quite easily with a bit of steam.

Code Agents are incredibly hard to build and use. Vibe Coding is dead for a reason. I remember vividly the inflation of Todo apps and JS frameworks (Ember, Backbone, Knockout are survivors) years ago.

The more you know about agents, and especially code agents, the more you understand why engineers won’t be replaced so fast - at least not the Senior Engineers who hone their craft.

I enjoy fiddling with experimental agent implementations, but value certain frameworks. They solved, in an opinionated way, problems you will run into if you dig deeper and others depend on you.


To be clear, no one in this thread said this is replacing all senior engineers. But it is still amazing to see it work, and it’s very clear why the hype is so strong. But you’re right that you can quickly run into problems as it gets bigger.

Caching helps a lot, but yeah, there are some growing pains as the context gets larger. Anthropic’s caching strategy (4 blocks you designate) is a bit annoying compared to OpenAI’s cache-everything-recent. And you start running into the need to summarize old turns, or outright toss them, and to decide what’s still relevant. Large tool call results can be killer.
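
For anyone who hasn't hit this yet, Anthropic's designated-breakpoint style looks roughly like this minimal sketch (the model name and prompt are placeholders): you mark up to 4 blocks with cache_control, and everything up to each marker becomes a cacheable prefix.

    import anthropic

    client = anthropic.Anthropic()
    BIG_SYSTEM_PROMPT = "...all the stable project context..."  # placeholder

    resp = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        system=[{
            "type": "text",
            "text": BIG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},  # cache everything up to here
        }],
        messages=[{"role": "user", "content": "next turn..."}],
    )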

I think at least for educational purposes, it’s worth doing, even if people end up going back to Claude Code, or away from agentic coding altogether, for their day to day.


>build your own lightsaber

I think this is the best way of putting it I've heard to date. I started building one just to know what's happening under the hood when I use an off-the-shelf one, but it's actually so straightforward that now I'm adding features I want. I can add them faster than a whole team of developers on a "real" product can add them - because they have a bigger audience to serve.

The other takeaway is that agents are fantastically simple.


Agreed, and it's actually how I've been thinking about it, but it's also straight from the article, so can't claim credit. But it was fun to see it put into words by someone else.

And yeah, the LLM does so much of the lifting that the agent part is really surprisingly simple. It was really a revelation when I started working on mine.


I also started building my own, it's fun and you get far quickly.

I'm now experimenting with letting the agent generate its own source code from a specification (currently generating 9K lines of Python code (3K of implementation, 6K of tests) from 1.5K lines of specifications: https://alejo.ch/3hi).


Just reading through your docs, and feeling inspired. What are you spending, token-wise? Order of magnitude.


What are you using for transcription?

I tried Whisper, but it's slow and not great.

I tried the gpt audio models, but they're trained to refuse to transcribe things.

I tried Google's models and they were terrible.

I ended up using one of Mistral's models, which is alright and very fast except sometimes it will respond to the text instead of transcribing it.

So I'll occasionally end up with pages of LLM rambling pasted instead of the words I said!


I recently bought a mint-condition Alf phone, in the shape of Gordon Shumway of TV's "Alf", out of the back of an old auto shop in the south suburbs of Chicago, and naturally did the most obvious thing, which was to make a Gordon Shumway phone that has conversations in the voice of Gordon Shumway (sampled from YouTube and synthesized with ElevenLabs). I use https://github.com/etalab-ia/faster-whisper-server (I think?) as the Whisper backend. It's fine! Asterisk feeds me WAV files, an AGI program feeds them to Whisper (running locally as a server) and does audio synthesis with the ElevenLabs API. Took like 2 hours.


Been meaning to build something very similar! What hardware did you use? I'm assuming that a Pi or similar won't cut it


Just a cheap VOIP gateway and a NUC I use for a bunch of other stuff too.


Whisper.cpp/Faster-whisper are a good bit faster than OpenAI's implementation. I've found the larger whisper models to be surprisingly good in terms of transcription quality, even with our young children, but I'm sure it varies depending on the speaker, no idea how well it handles heavy accents.
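
If anyone wants to try it, the faster-whisper API is about this small (a sketch; the model size and compute_type are the main speed/quality knobs):

    from faster_whisper import WhisperModel

    model = WhisperModel("large-v3", compute_type="int8")  # int8 keeps CPU/RAM use sane
    segments, info = model.transcribe("recording.wav")
    print("".join(seg.text for seg in segments))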

I'm mostly running this on an M4 Max, so pretty good, but not an exotic GPU or anything. But with that setup, multiple sentences usually transcribe quickly enough that it doesn't really feel like much of a delay.

If you want something polished for system-wide use rather than rolling your own, I've been liking MacWhisper on the Mac side, currently hunting for something on Arch.


The new Qwen model is supposed to be very good.

Honestly, I've gotten really far simply by transcribing audio with whisper, having a cheap model clean up the output to make it make sense (especially in a coding context), and copying the result to the clipboard. My goal is less about speed and more about not touching the keyboard, though.
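
The whole flow fits in a few lines. Something like this sketch, where the model names and cleanup prompt are just my assumptions (pyperclip does the clipboard part):

    import pyperclip
    from faster_whisper import WhisperModel
    from openai import OpenAI

    # 1. Local transcription
    segments, _ = WhisperModel("small").transcribe("note.wav")
    raw = "".join(seg.text for seg in segments)

    # 2. Cheap-model cleanup
    cleaned = OpenAI().chat.completions.create(
        model="gpt-4o-mini",  # any cheap model works here
        messages=[
            {"role": "system", "content": "Fix transcription errors in this dictated, "
                                          "code-related text. Don't add or remove content."},
            {"role": "user", "content": raw},
        ],
    ).choices[0].message.content

    # 3. Straight to the clipboard, no keyboard involved
    pyperclip.copy(cleaned)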


Thanks. Could you share more? I'm about to reinvent this wheel right now. (Add a bunch of manual find-replace strings to my setup...)

Here's my current setup:

vt.py (mine) - voice type - uses PyQt to make a status icon and set up global hotkeys for start/stop/cancel recording. Formerly used 3rd party APIs, now uses parakeet_py (patent pending).

parakeet_py (mine): A Python binding for transcribe-rs, which is what Handy (see below) uses internally (just a wrapper for Parakeet V3). Claude Code made this one.

(Previously I was using voxtral-small-latest (Mistral API), which is very good except that sometimes it will output its own answer to my question instead of transcribing it.)

In other words, I'm running Parakeet V3 on my CPU, on a ten year old laptop, and it works great. I just have it set up in a slightly convoluted way...

I didn't expect the "generate me some rust bindings" thing to work, or I would have probably gone with a simpler option! (Unexpected downside of Claude is really smart: you end up with a Rube Goldberg machine to maintain!)

For the record, Handy - https://github.com/cjpais/Handy/issues - does 80% of what I want. Gives a nice UI for Parakeet. But I didn't like the hotkey design, didn't like the lack of flexibility for autocorrect etc... already had the muscle memory from my vt.py ;)


My use case is pretty specific - I have a 6 week old baby. So, I've been walking on my walking pad with her in the carrier. Typing in that situation is really not pleasant for anyone, especially the baby. Speed isn't my concern, I just want to keep my momentum in these moments.

My setup is as follows:

- Simple hotkey to kick off a shell script to record

- Simple Python script that uses inotify to watch the directory where audio is saved. Uses whisper. This same script runs the transcription through Haiku 4.5 to clean it up. I tell it not to modify the contents, but it's Haiku, so sometimes it just does it anyway. The original transcript and the AI-cleaned versions are dumped into a directory.

- The cleaned-up version is run through another script to decide if it's code, a project brief, or an email. I usually start the recording with "this is code" or "this is a project brief" to make it easy. Then, depending on what it is, the original, the transcription, and the context get run through different prompts with different output formats.

It's not fancy, but it works really well. I could probably vibe code this into a more robust workflow system all using inotify and do some more advanced things. Integrating more sophisticated tool calling could be really neat.
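
For anyone copying this, the inotify piece is tiny with the inotify_simple package. A sketch, with transcribe/clean_up standing in for the whisper and Haiku steps, and made-up paths:

    from inotify_simple import INotify, flags

    def transcribe(path):
        return "raw transcript of " + path  # whisper call goes here

    def clean_up(text):
        return text  # Haiku cleanup prompt goes here

    watcher = INotify()
    watcher.add_watch("/home/me/recordings", flags.CLOSE_WRITE)

    while True:
        for event in watcher.read():  # blocks until a recording finishes writing
            cleaned = clean_up(transcribe(f"/home/me/recordings/{event.name}"))
            with open(f"/home/me/transcripts/{event.name}.txt", "w") as f:
                f.write(cleaned)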


Parakeet is SOTA.


Agreed. I just launched https://voice-ai.knowii.net and am really a fan of Parakeet now. What it manages to achieve locally, without hogging too many resources, is awesome.



Handy is free, open-source and local model only. Supports Parakeet: https://github.com/cjpais/Handy


Speechmatics - it is on the expensive side, but provides access to a bunch of languages and the accuracy is phenomenal on all of them, even with multiple speakers.


I use Willow AI, which I think is pretty good


The reason a lot of people don’t do this is because Claude Code lets you use a Claude Max subscription to get virtually unlimited tokens. If you’re using this stuff for your job, Claude Max ends up being like 10x the value of paying by the token, it’s basically mandatory. And you can’t use your Claude Max subscription for tools other than Claude Code (for TOS reasons. And they’ll likely catch you eventually if you try to extract and reuse access tokens).


Is using CC outside of the CC binary even needed? CC has an SDK; could you not just use the proper binary? I've debated using it as the backend for internal chat bots and whatnot unrelated to "coding". Though maybe that's against the TOS, as I'm not using CC in the spirit of its design?


That's very much in the spirit of Claude Code these days. They renamed the Claude Code SDK to the Claude Agent SDK precisely to support this kind of usage of it: https://www.anthropic.com/engineering/building-agents-with-t...


> catch you eventually if you try to extract and reuse access tokens

What does that mean?


I’m saying if you try to use Wireshark or something to grab the session token Claude Code is using and pass it to another tool so that tool can use the same session token, they’ll probably eventually find out. All it would take is having Claude Code start passing an extra header that your other tool doesn’t know about yet, suspend any accounts whose session token is used in requests that don’t have that header and manually deal with any false positives. (If you’re thinking of replying with a workaround: That was just one example, there are a bajillion ways they can figure people out if they want to)


How do they know your requests come from Claude Code?


I imagine they can spot it pretty quick using machine learning to spot unlikely API access patterns. They're an AI research company after all, spotting patterns is very much in their wheelhouse.


a million ways, but e.g.: once in a while, add a "challenge" header; the next request should contain a "challenge-reply" header for said challenge. If you're just reusing the access token, you won't get it right.

Or: just have a convention/an algorithm to decide how quickly Claude should refresh the access token. If the server knows the token should be refreshed after 1000 requests and notices a refresh after 2000 requests, well, probably half of the requests were not made by Claude Code.
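
As a toy illustration of the challenge idea (emphatically not Anthropic's actual scheme), a reply only works if you hold a secret baked into the real client:

    import hashlib, hmac

    CLIENT_SECRET = b"baked-into-the-real-binary"  # hypothetical shared secret

    def challenge_reply(challenge):
        # Only a client holding the secret can compute this; a lifted
        # session token alone isn't enough.
        return hmac.new(CLIENT_SECRET, challenge.encode(), hashlib.sha256).hexdigest()

    print(challenge_reply("a1b2c3"))  # goes in the challenge-reply header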


When comparing, are you using the normal token cost, or cached? I find that the vast majority of my token usage is in the 90% off cached bucket, and the costs aren’t terrible.
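
Back-of-envelope, with those numbers (90% of input tokens cached at a 90% discount), you end up paying roughly a fifth of list price for input:

    list_price = 3.00  # hypothetical $/Mtok input price
    cached_share = 0.9
    effective = cached_share * (0.1 * list_price) + (1 - cached_share) * list_price
    print(effective)  # 0.57, i.e. 19% of list price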


Kimi is noticeably better at tool calling than gpt-oss-120b.

I made a fun toy agent where the two models shoulder-surf each other and swap turns, either voluntarily during a summarization phase, or forcefully if a tool-calling mistake is made, and Kimi ends up running the show much much more often than gpt-oss.

And yes - it is very much fun to build those!


Cerebras now has GLM 4.6. Still obscenely fast, and now obscenely smart, too.


Aren't there cheaper providers of GLM 4.6 on Openrouter? What are the advantages of using Cerebras? Is it much faster?


You know how sometimes when you send a prompt to Claude, you just know it’s gonna take a while, so you go grab a coffee, come back, and it’s still working? With Cerebras it’s not even worth switching tabs, because it’ll finish the same task in like three seconds.


Cerebras offers a $50/mo and $200/mo "Cerebras Code" subscription for token limits way above what you could get for the same price in PAYG API credits. https://www.cerebras.ai/code

Up until recently, this plan only offered Qwen3-Coder-480B, which was decent for the price and speed you got tokens at, but doesn't hold a candle to GLM 4.6.

So while they're not the cheapest PAYG GLM 4.6 provider, they are the fastest, and if you make heavy use of their monthly subscription plan, then they're also the cheapest per token.

Note: I am neither affiliated with nor sponsored by Cerebras, I'm just a huge nerd who loves their commercial offerings so much that I can't help but gush about them.


It's astonishingly fast.


Ooh thanks for the heads up!


What’s a good starting point for getting into this? I don’t even know what Cerebras is. I just use GitHub Copilot in VS Code. Are these local models?


A lot of it is just from HN osmosis, but /r/LocalLLaMA/ is a good place to hear about the latest open weight models, if that's interesting.

gpt-oss 120b is an open weight model that OpenAI released a while back, and Cerebras (a startup making massive wafer-scale chips that keep models in SRAM) is running it as one of the models they provide. They're a small-scale contender against Nvidia, but by keeping the model weights in SRAM, they get pretty crazy token throughput at low latency.

In terms of making your own agent, this one's pretty good as a starting point, and you can ask the models to help you make tools for eg running ls on a subdirectory, or editing a file. Once you have those two, you can ask it to edit itself, and you're off to the races.
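
Those two starter tools are just plain functions your loop dispatches to. A sketch (the names and error handling are mine, purely illustrative):

    import os

    def list_dir(path):
        """ls-like listing of a subdirectory."""
        return "\n".join(sorted(os.listdir(path)))

    def edit_file(path, old, new):
        """Replace an exact snippet in a file; the model supplies old/new."""
        text = open(path).read()
        if old not in text:
            return "error: snippet not found"
        with open(path, "w") as f:
            f.write(text.replace(old, new, 1))
        return "ok"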


Here is ChatGPT in 50 lines of Python:

https://gist.github.com/avelican/4fa1baaac403bc0af04f3a7f007...

No dependencies, and very easy to swap out for OpenRouter, Groq or any other API. (Except Anthropic and Google, they are special ;)

This also works on the frontend: pro tip, you don't need a server for this stuff, you can make the requests directly from an HTML file. (Patent pending.)
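
For the curious, "no dependencies" really is just stdlib urllib; swap the URL and model for OpenRouter, Groq, etc. (a sketch; the key comes from an environment variable):

    import json, os, urllib.request

    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Hello"}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])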


But it's way more expensive since most providers won't give you prompt caching?


I hope they make them quieter this time!


FYI someone made their own firmware which will drive the motor at a slower speed. Significantly reduces the noise.


I remember seeing that, though iirc it was a lot more surgery than I wanted to do on my blinds (which already had a shaky wife-acceptance-factor due to spotty zigbee connections). Thanks for the reminder, though.


I have non-Ikea electric blinds. Sure, they're a little grindy, but it's like 30 seconds twice a day.


I'd agree, except one of the main reasons I bought them was to wake up to natural light, not to wake up to WHIIIRRRRRRRRR.


OT, but leaving the zeros on those gigabit numbers would make this a lot less work to understand; at first I thought maybe you were in Mbps throughout.


>They can't be treated like bicycles because they're too fast but aren't nearly as dangerous as motorcycles.

As a former rider, why? Cars were the most dangerous part, in my experience.

Something that stuck with me from my motorcycle safety course: the speed at which hitting a wall is 50% fatal is 30 mph. It doesn't take highway speeds.


The goal is not to protect the people on motorcycles, who (if we're being brutally honest) forfeited most expectations of safety as soon as they got on their bikes.

The goal is to protect the regular cyclists and pedestrians who they currently share paths with while trying to not make them TOO unsafe.


The "Yes"/"Maybe Later" school of governance.


That is the only way to run a government.

Consider for a moment what a government of "Yes"/"No Forever, without ever revisiting the question" would result in.

We aren't at the end of history.


Nobody’s talking about a blood oath to promise never to revisit the issue. But there’s a difference between leaving the door open to future reconsideration, versus pushing consistently against the wishes of the public and only backing off temporarily for tactical reasons.

And for some reason, once these things pass, it’s a one way door. When does the US public get a chance to reconsider the Patriot Act?


The US public reconsiders it every time it sends a new congress in. Congress can repeal it in any session, they don't need to wait for it to expire.

Like, that's just the nature of representative democracy.


Well yeah, it's exploiting a problem in representative democracy. That doesn't work unless people become single issue voters on specifically that matter, and in that case, you can just screw over the public with something else.

The practice deserves every bit of scorn it gets.


It's not a flaw in representative democracy, it's a flaw in America as a whole. Most recently, the public looked at the options before them and chose to send in a slate of absolute lunatics.

When you can't even figure out that having blatantly and openly vindictive and corrupt people in government is a bad idea, the fact that they aren't annually revisiting some legislation that's an issue for the 5% of the population that is the tech crowd isn't the problem. Like, it's a problem, but it's not the problem.


This thread is literally about Denmark and the European Union.


It is, but the sub thread is for whinging about the Patriot act and why a representative democracy never gets the chance to repeal it. (Wherein I argue that it has plenty of chances, it just isn't an important political issue compared to, well, everything.)


The problem is the asymmetry. If the choices were «yes, but we can re-evaluate later» and «no, but we can re-evaluate later», then there wouldn’t be an issue. But especially with laws implemented at the EU level and not the national level, it’s extremely difficult to get out of it after it’s been implemented. The choices are, in practice, «yes, for the next foreseeable decades» and «no, for the next year».


As this very news item shows, it's not particularly easy to pass laws either; GDPR took over four years from the commission proposal to a final negotiated text.

We're now at over four years[1] since initial consultations were held and there's still not a formal consensus position in the council and the encryption bypass is explicitly excluded in the Parliament's draft, so it's not like we're particularly close to a law being enacted.

Basically, the asymmetry you are describing is pretty exaggerated.

[1]: https://eur-lex.europa.eu/legal-content/EN/PIN/?uri=CELEX:52...


The problem is that for government power expansions/individual rights reductions, "Yes" can in fact be taken to mean "Yes forever, without ever revisiting the question". (The mechanism needn't be that there is literally no formal revisiting; it can be sufficient that weakening government power is politically untenable because whoever proposed it will be held accountable for every subsequent bad event that could hypothetically have been prevented with some unknown additional amount of government power.)

Stasis is not great, but surely preferable to an authoritarian ratchet.


It was an allusion to the tech industry's disrespect for users, when they don't give an option to say "no, and please stop asking me", because the company really really wants you to say yes, and what they care about is more important than what the user cares about.

I'm not suggesting that they never reconsider things, just that those in government really seem to want this to happen, despite it being unpopular with the electorate, and so they try on a regular basis to get it passed, despite the public outcry each time.


Well, yes, but even a "no forever" would be revisited under the right circumstances.

But what we do need is a wider no. Not just "no, this highly specific combination of stipulations is not ok, let's try it again next month with one or two little tweaks". That's what we have now: whack-a-mole. The problem with that is that once it passes, they will not have a vote every month to retract it again; then it will be there basically forever.

What we need is a "No this whole concept is out of bounds and we won't try it again unless something changes significantly".


Yes, and (at least in the US) we're seeing this in other contexts too. Tons and tons of rehashes of laws restricting abortion, voting rights, or just executive actions that are slightly different from ones previously ruled invalid. The question is "yes" or "no" to what, exactly.


Politics should follow the exponential backoff model xD

Every time your law fails to pass you cannot revisit it for a longer period of time.

1 year, 5 years, 10 years, etc.

This means that laws with enough political will behind them get passed, but bad laws can be more easily blocked.


This doesn't fit at all with how governance and politics works in reality. Rapid changes to society or a crisis can suddenly make deeply unpopular ideas very popular.


Great. Now, define how we can determine if two bills are the same 'your law' (Who decides? Lifetime-appointed partisan judges? The old legislature? The new legislature? The executive god-king?).

... And then figure out how to prevent poison-pill sabotage, because the best way to prevent a law from ever passing becomes 'deliberately draft a really bad version of it, and have your party veto it'.

Giving a one-time majority in a legislature a way to constrain anything the next 10 years of legislatures try to do is a terrible idea.


It's a reverse of what you're describing, but a similar mechanism like this in Canada is their notwithstanding clause.

If the Supreme Court of Canada rules a law unconstitutional, the government in power can overrule their ruling by using the notwithstanding clause. However, the notwithstanding clause override to keep the law in effect only lasts for five years. Subsequent legislatures have to keep renewing the override or the Supreme Court's ruling of unconstitutionality takes effect again.


> Giving a one-time majority in a legislature a way to constrain anything the next 10 years of legislatures try to do is a terrible idea.

There's no option to do that though. To block something for 10 years you'd have to vote it down at least 3 times, 1 and 5 years apart (which would mean doing it across at least two legislative terms).


I don't think you understand how legislatures around the world work, if you think this wouldn't be gamed to absurdity.

Important bills generally don't go to a vote unless everyone involved knows exactly how many votes they are going to get. Your proposal won't actually stop anything that a majority wants passed from passing - as long as a minority can't get ahead of them by poisoning the bill.

Bills are not single-issue. Any bill - even the best - can be trivially tanked by attaching a bunch of awful garbage to it. You are giving a single person (or whatever the minimum quorum is for putting a bill to vote) the power to kill, for years, progress on any issue - by putting forward their own version that's saddled with crap.

This would immediately be abused to disastrous effect.

You will end up with a complete farce, with the minority trying to outdo itself by coming up with the worst possible bills imaginable that happen to include slivers of a majority's agenda. It's a completely ass-backwards way to approach any decision-making process - because you are effectively giving multi-year issue veto power to any member of a legislature who's willing to embarrass themselves by proposing garbage (that they don't actually want passed).

Or, worse yet, the majority will take the bait, and pass the bad bill anyway (because if they don't vote for it now, they won't get the chance to revisit the issue for years).


If it was that simple then no legislature anywhere would ever pass any bill already. Evidently there are countermeasures to these things.


>Consider for a moment what a government of "Yes"/"No Forever, without ever revisiting the question" would result in.

That's pretty much what the US constitution is. Once something's in it, it doesn't realistically get out of it.


The bar for adding something to it is the same bar for removing something from it. It's not 50%.


There's a big difference between hammering something down over and over again until protesters and opposition gives up and "situation has changed, lets revisit this".


Which is, tbh, a bad-faith tactic for wearing down the electorate. It’s similar to how Brexit advocates kept the issue alive until they gained enough momentum to push it through. Nearly a decade later, most of the promised benefits haven’t materialized, and the UK has borne significant self-inflicted economic costs.

Growth has slowed to a crawl (just over 1%), trade friction has choked countless small exporters, and the “take back control” slogan now sounds hollow when irregular immigration is still higher than ever, while industries that relied on EU labor, say, healthcare or agriculture, are struggling.

Even though public opinion has shifted toward rejoining the EU, it could take a decade or more to rebuild the political will — and any return deal would likely come with less favorable terms.


Wait, so people who maintain strong beliefs that disagree with you long enough to ‘win’ are acting in bad faith (brexit), but working for 10 years to re-enter the EU wouldn’t be?

That’s a tough bar to get past…


There’s an entropy factor involved though.

It’s easier to destroy things than to restore them.

We, the UK, will never be able to rejoin the EU on the same sweetheart terms as we had previously. That’s gone and can’t be replicated.

In much the same way as those campaigning for Scottish independence continue to campaign forever no matter how many referendums they lose, no one will be able to recreate the UK if they succeed.

You need the thinnest majority to win and you can keep campaigning forever.

Which is why there was so much outside interference and breaking of the Brexit campaign rules. No matter the cost it can’t be reversed.


> It’s easier to destroy things than to restore them

No such rule exists. Historically, it's been almost impossible to remove any piece of regulation or bureaucracy once it has taken root. Radical dismantling of institutions is a rare thing. That's the same for public services or, say, chat control. I did not expect Brexit to succeed: in fact it only happened because David Cameron had a whimsical moment of fairness and respected a referendum result, against general expectations since he had nothing to gain.

Looking back up the thread, we're equating nagging to construct something (chat control) with nagging to dismantle something (UK EU membership). And I suppose Scottish independence would have aspects of both construction and destruction. The pernicious things that are hard to change are attractive-sounding policy ideas, whether they build up edifices or tear them down.


The issue was that support for "Brexit" was a bad-faith fabrication by Murdoch-owned media with a dash of foreign-funded interference.

When you put down any specific Brexit implementation and asked people to vote on it, you generally got supermajority opposition.

This is similar to, for example, the nitwits in Kentucky who fiercely opposed Obamacare but were vociferously supportive of Kynect and the ACA--all of which are the same thing.


It does read the way you describe in your question. My interpretation of OP's example is more about the asymmetry in how much more (relatively) feasible it is for one party to re-introduce a vote for something than it is to rally political will en masse in a way that reflects what the electorate ultimately wants.

An example that comes to mind is the string of legislation like SOPA that despite having lost, the general goal continued to appear in new bills that were heavily lobbied for.


You’re right. That aspect of how Brexit was carried through was not acting in bad faith. The anti-European faction has been fighting since we joined to reverse it. Many other aspects of the process were in bad faith but people must be allowed to change their minds, disagree, pursue their faith.


The real problem here is that it should be easier to take powers away from the government than to grant them.

If you have a system where passing a law requires three separate elected bodies to approve it, the problem is that it makes bad laws sticky. If a sustained campaign can eventually get a law passed giving the executive too much power and then the executive can veto any future repeal of it, that's bad.

The way you want it to work is that granting the government new powers requires all government bodies to agree, but then any of them can take those powers away. Then you still have all the programs where there is widespread consensus that we ought to have them, but you can't get bad ones locked in place because the proponents were in control of the whole government for ten seconds one time.


Constitutional clause that mandates sunsetting of laws could work for that.

Also, any sort of "vetoing direct democracy", where voters can repeal a law.


The first one mostly works but it generally has two problems. First, they just put "re-pass all the old junk that was about to expire" into this year's omnibus and then there's so much of it at once that the bad stuff gets re-enacted by default. That's better than the status quo but only a little. And second, you don't really want constraints on the government to expire. To some extent you can put those in the constitution, but a lot of this is things like anti-corruption laws that, if the current government is corrupt, they're not going to want to re-enact.

The second one is great. Direct democracy but you can only use it to repeal things. Let the general population veto the omnibus and make them go back and split it out.


As a EU citizen I'd ask for at least a 2/3 majority to let the UK back into the EU, maybe 3/4. They came, they were always skeptical, they left, they want to come back? Please demonstrate that you made up your mind and won't start thinking about another Brexit in less than 10 years.


Brexit can't just be undone. The UK would have to go through the full accession procedure. This would be much easier for the UK than for countries like Georgia, since the UK system hasn't diverged much, but the special agreements and exceptions the UK had would have to be renegotiated from scratch.

Adding a new member state always requires unanimous consent from existing member states, for good and ill.


I was an EU citizen. Then I wasn't. Being an EU citizen means nothing.


> Growth has slowed to a crawl (just over 1%)

So like France and Germany?

> “take back control” slogan now sounds hollow when irregular immigration is still higher than ever.

1. Take back control was about a lot more than immigration; it was primarily about regulation.

2. It has stopped EU immigration, which was far larger in scale than illegal immigration and where there was no way of refusing to let people in or of removing them.

> most of the promised benefits haven’t materialized

Nor have the costs. The government predicted an immediate severe recession if we so much as voted for Brexit, let alone implemented it.


You can just set up your own cloud on leased machines and pocket the huge difference in cost. Devops languages are pretty easy to learn, IME, and the infra stuff takes less maintenance than the AWS proponents seem to think. I guess it depends on your usage profile, but bandwidth especially is ruinously expensive compared to what you get with leased machines.


Something nonobvious to consider: 10G copper/RJ45 SFP+ modules run hot, to the point where our Mikrotik switch's manual mentioned that we could use them but strongly recommended only populating every other port if we did. Heat wasn't a problem at all with the fiber ones.


I did a few rooms with fiber and copper for 10G, you don't need EMT, I found the blue flexible smurf tube perfect for this.


Later, they say “lithium ion batteries only have 4 to 6 hours of capacity”, which again, what? But maybe that implies that the actual capacity rating is their “capacity” x 4-6.

