1. Early 2025: Identify Federal sites for AI data centers, clean energy facilities, and geothermal zones. Streamline permitting processes and draft reporting requirements for AI infrastructure. Develop a plan for global collaboration on trusted AI infrastructure.
2. By Mid-2025: Issue solicitations for AI infrastructure projects on Federal sites. Select winning proposals and announce plans for site preparation. Plan grid upgrades and set energy efficiency targets for AI data centers.
3. By Late 2025: Finalize all permits and complete environmental reviews. Begin construction of AI infrastructure and prioritize grid enhancements.
4. By 2027: Ensure AI data centers are operational, utilizing clean energy to meet power demands.
5. Ongoing: Advance research on energy efficiency and supply chain resilience. Report on AI infrastructure impacts and collaborate internationally on clean energy and trusted AI development.
Well, it would funnel tax dollars to already-established companies that don't need the subsidies. :p
I have some sympathy for certain domestic capabilities (e.g. chip fabrication) but this "AI" bubble cross-infecting government policy is frustrating to watch.
It's hard to know how much this effort is about the government's belief in AI, and how much of it is about supporting the technology sector while using AI as a convenient buzzword.
I think, though, that even if LLMs turn out to be a dead-end and don't progress much further... there are a lot of benefits here.
One of the US's key strategic advantages is brain drain.
We are one of the world's premier destinations for highly educated, highly skilled people from other countries. Their loss, our gain.
There are of course myriad other countries where they could go, many of them more attractive than the US in various ways. Every country in the world is in a sense competing for this talent.
As of this year, the US employment-based green card backlog for citizens of India is such that they're currently still processing applications filed in 2019 for the top EB-1 category (that's "Extraordinary People, Outstanding Researchers and Professors, and Multinational Executives and Managers"), and 2012 for mere PhDs. So the numbers would have to go down a lot for the US to even notice.
Speaking as an immigrant myself, so long as there's still noticeable wealth disparity, people will make the jump. The other aspect that makes the US especially attractive compared to some others is its family immigration policy - people generally want their family to join them eventually, and the US has an unusually large allotment for that compared to many other countries.
Zero-sum thinking has already infected public policy. Turns out liberals can be just as "they took our jobs" as the redneck conservative.
The real fear should be that people won't want to come. Already, for Chinese international students, it's roughly break-even between staying in the US and going back to China. Who wants to deal with all the bureaucracy and hatred when they could just go back and work for DeepSeek?
>> Well, it would funnel tax dollars to already-established companies that don't need the subsidies.
Yeah it sounded like a gift to nVidia.
My prediction was that nVidia would ride the quantum wave by offering systems to simulate quantum computers with huge classical ones. They would do that by asking the government to fund such systems for "quantum algorithm research" since nobody really knows what to use QC for yet.
This move primes that relationship using the current AI hype boom.
So look for their quantum simulation-optimized chips in the near future.
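For a sense of the scale involved (my own back-of-the-envelope, not anything Nvidia has published): a dense statevector simulation of n qubits has to hold 2^n complex amplitudes, so memory doubles with every qubit you add.

    # Rough memory footprint of a dense statevector simulation,
    # assuming one complex128 amplitude (16 bytes) per basis state.
    def statevector_bytes(n_qubits: int) -> int:
        return 16 * (2 ** n_qubits)

    for n in (30, 40, 50):
        gib = statevector_bytes(n) / 2**30
        print(f"{n} qubits: ~{gib:,.0f} GiB")
    # 30 qubits: ~16 GiB; 40 qubits: ~16,384 GiB; 50 qubits: ~16,777,216 GiB

By 45-50 qubits you're into petabyte territory, which is exactly the kind of problem you'd pitch a government-funded GPU cluster for.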
GPU, GPGPU, crypto, ray tracing, AI, quantum. Nvidia is a master at milking dollars from tech fads.
They'll be scouring the internet for signs of thoughtcrime and jamming the sources they find. Also, automating those sectors that tend to produce whistleblowers.
Because if they don't do the bad thing first, some bogeyman might become better at it than they are. Same logic that gave us the Manhattan project.
How else will we uncover what we could expose the Tor-ified to so they flip out and lose their job, so Agent Jackson can take their jerb by reading their Hacker News comments and knowing all of the media they've been exposed to during the day?
Or try to make them have a heart attack by making a digital twin of them which synchronizes their sentiment and smart watch health data, and man-in-the-middles all of their digital conversations with creepy GenAI? Our adversaries might be doing it, so line up some fresh specimens. Come on bruh, it's the future, you gotta think bigger.
I suspect we are not in a race for a better LLM, but a race to the singularity.
"I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive; and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.[4]"
Belief in a coming technological singularity is completely unscientific and based on zero hard evidence. It is essentially a secular religion with computers cast as deities to either usher in a new utopia or exterminate us all. Surely I am coming quickly, sayeth the Lord.
When I step through the logic in my mind, it seems likely.
Let's make the leap of faith that we can improve our AIs to the point where they actually understand the code they're reading and can suggest improvements. Current LLMs can't do it, but perhaps another approach can. I don't think this is a big leap. Might be 10 years, might be 100.
It's not unreasonable to think there is a lot of cruft and optimization to be had in our current tech stacks allowing for significant improvement. The AI can start looking at every source file from the lowest driver all the way up to the UI.
It can also start looking at hardware designs and build chips dedicated to the functions it needs to perform.
You only need to be as smart as a human to achieve this. You can rely on a "quantitative" approach because even human-level AI brains don't need to sleep or eat or live. They just work on your problems 24/7, and you can have as many as you can manufacture and power.
I think gaining "qualitative" superiority is actually a little easier, because with a large enough database the AI has perfect recall and all of the world's data at its fingertips. No human can do that.
Let's say all of that happens. Is there any reason to believe that will lead to "a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence"?
Or is it more reasonable to suppose that 1) all those improvements might get us a factor of 10 in efficiency, but they won't get us an intelligence that can get us another factor of 10 in efficiency, 2) each doubling of ability will take much more than doubling of the number of CPU cycles and RAM, and 3) growth will asymptotically approach some upper limit?
A lot of tech improvements are s-curves, or waves of s-curves, not exponentials. For a given problem domain you will still have s-curves, but overall intelligence/progress might still look exponential in the aggregate.
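To make the distinction concrete (my framing, not a claim about any particular technology): an exponential keeps compounding forever, while a logistic s-curve looks exponential early on and then flattens toward a ceiling.

    exponential: f(t) = A * e^(k*t)                 (no ceiling; keeps compounding)
    logistic:    f(t) = L / (1 + e^(-k*(t - t0)))   (looks exponential for small t, saturates at the cap L)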
Yeah sure, there are physical limits, and doing things like mining, manufacturing, and logistics are really slow compared to just thinking. But that's just time and money. I don't see why it wouldn't happen. You build a machine to make machines. First you have 1, then 2, then 4, then 8, then 16 etc.
Then when the earth is used up, you look at space. Space travel is a lot easier because you don't need to keep a meat bag alive.
I for one am not worried about this. We still have trouble building robots that can wash dishes autonomously, let alone build nuclear power plants autonomously. The key word here being autonomously i.e. without human intervention, which is what a superintelligent machine would require to grow into this singularity.
I'm not "worried" about it. Just observing that the race is on. I don't think it's necessarily a bad thing as long as the tech is in the hands of "the good guys".
I think as soon as you can get an AI to break a task down into smaller tasks, and make itself a todo list, you have the autonomy. You just kick it all off by asking it to improve itself. It doesn't have to "want" anything, it just needs to work.
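As a toy sketch of that loop (llm() here is a hypothetical stand-in for whatever model you'd call; this is just the shape of the idea, not how any existing agent framework works):

    # Toy "autonomy" loop: decompose a goal into a todo list and keep working
    # through it, letting each step spawn further subtasks. No "wanting" involved.
    def llm(prompt: str) -> str:
        raise NotImplementedError("hypothetical stand-in for a real model call")

    def run(goal: str) -> None:
        todo = [goal]
        while todo:
            task = todo.pop(0)
            plan = llm(f"Break this into smaller steps, or reply DONE if it is small enough: {task}")
            if plan.strip() == "DONE":
                llm(f"Do this task and report the result: {task}")
            else:
                todo.extend(step for step in plan.splitlines() if step.strip())

    # run("Improve your own source code.")  # the "kick it all off" step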
In the micro, nothing. We'd have a bunch of extra cash we can spend on other stuff.
In the macro?
The overall goal is keeping the US competitive in technology. Both in terms of producing/controlling IP as well as (perhaps even more crucially) remaining a premier destination for technologists from all over the world. The cost of not achieving that goal is... incalculable, but large.
Whether or not this is a good way to achieve that goal is of course up for debate.
It's not just the economic cost that is a consideration here. The military applications of AI are potentially staggering (as are the associated ethical concerns - but, as with any military tech, once someone does it and the advantages become clear, others will inevitably follow). No major power player is going to want to risk getting drastically outpaced by its primary competitors.
Note that this isn't a hypothetical, either. Israel is already using AI to pick targets. Ukraine is already using AI-controlled drones to beat jamming. There's no indication that either one intends to stop anytime soon, which tells the others that the tech is working for this purpose.
It doesn't matter what the incoming admin thinks, it just matters what makes the most economic sense. If interconnection queues are backed up and the grid is lacking the electricity required, offgrid might be the way to go: https://www.offgridai.us/
People assume gas plants are the fastest and easiest way to add power, but you have to get the natural gas to the plant. Natural gas pipelines take years to build, and there are also gas turbine supply issues. Offgrid solar + battery solves all of these.
That's actually a wildly short timeframe for a large political plan like this. Going from idea to breaking ground on a massive infrastructure project in a calendar year? It'll be interesting to see if this turns into a political/legal quagmire or not, since it's ostensibly aligned with both parties' (donors') interests.
Agreed, it seems wildly optimistic to me. I was working at a military installation command last year and it would often take 6 months just to survey some existing buildings for renovation + repurposing and trying to get those plans approved. Stuff like "repaint these walls, mitigate mold, and install sub-divided chain-link fence in this warehouse" was busting our timelines. I'm assuming that just getting the paperwork right for a Federal government construction project is an equally-huge lift.
This is pretty ridiculous. The US is already the undisputed world leader in building data centers. Our companies are great at building data centers on their own initiative, and they don’t need government subsidies to do that. This just seems like a subsidy to the “clean energy” industry, disguised as industrial strategy.
That schedule is impossible. Given the amount of power data centers take, and the fact that the government is new to building them, the environmental review process and lawsuits will take years.
I agree in general, but mainly because of equipment lead times, not permitting/etc. times.
Remember it's federal land, so they can't be held to state permitting or building codes, etc., only what they choose to follow (i.e., if their agency adopted it explicitly).
Having seen lots of datacenters constructed over the years: the bureaucracy parts are tractable if they want them to be, because they can mostly ignore them.
So for me it breaks down like:
Building construction - it could be done.
Power provision - hard to say without knowing the sites. Some would be a clear yes, some a clear no. Probably more nos than yeses.
Filling it with AI-related anything at a useful scale - no, the chips don't exist unless you steal them from someone else, or you are putting older surplus stuff in.
I’m not an environmental lawyer, but you don’t think these will be subject to NEPA? It sounds like they’re trying to piggy-back off land that was made available for solar projects under an existing EIS, but those decisions could still be litigated.
Automobile manufacturers can bring a plant from foundation to production in under a year, so it’s not that far-fetched. A datacenter is orders of magnitude less complex.
My understanding is that it's not the data center that's the problem, it's the energy required to run them. This is especially demanding with us rubbing AI over everything and using insane amounts of energy in the process.
I.e., making a new data center is easy. Making new power plants quickly - not so much. But hey, at least there's some renewed political will; better than nothing.
I don’t have sources to back it up, but I’m reasonably sure these factories draw even more power than a datacenter, possibly in the GW range. Powering robots, welders, and all kinds of mechanical systems in addition to electronics is very power-intensive.
As another data point, Apple has been doing 100% renewables for their DCs since 2014. A wind farm can definitely be built in months, and energy companies will always follow the money. Site selection will definitely take energy availability into account as well.
"Updated Wednesday, September 4, 2024 5:10 p.m. EST - Amazon reached out to deny the reports of a crack down on singing along with the radio in trucks and provided this PR video clip as evidence. A PR spokesperson told Jalopnik: “This post is completely inaccurate. Amazon has never issued guidance or communications to Delivery Service Partners that prohibits singing in the vehicle.”"
PEP 638 is exciting because it introduces syntactic macros to Python, allowing developers to customize and extend the language's syntax. This makes it easier to create specialized features for specific projects without changing Python itself. It opens up new possibilities for innovation and experimentation, making Python more powerful and flexible.
Isn't the risk that you make python no longer look like python? At what point should someone simply choose a different language better suited for that particular purpose?
We are also only using it because of GDPR requirements not to transfer data outside the EU. The slowdown is huge, ruling out any chat use-case: users get bored waiting for a 10-second response to trickle through.
That OpenAI themselves aren't spinning up a data center in the EU is beyond strange to me. The only thing I can imagine is that this is blocked by some legal provision tied to Microsoft's investment, but it has clearly not worked well for OpenAI, which now has to deal with Microsoft's shitty delivery of a subpar OpenAI-branded service for EU customers.
We're currently using the Swedish region btw, where model updates have been quite fast (weeks after official launch). Still, the slow response times make it non-viable anyway...
Gambling site or not: Cloudflare took their money for years, failed to communicate any problems, then deleted their data when they didn't accept the "enterprise deal". There's nothing saying that they won't do the same to ANY of their other thousands of customers, many of whom read this forum...
I’m curious how you find the Swedish output from different models. GPT-4 seems to return perfectly grammatical Swedish, but a Swedish friend says it reads like English. Do you notice this?
I’d love to have models that are better at idiomatic usage of other languages, so they can generate language learning content.