One flaw with this assumption is that images are available literally in the trillions to train on. With 3D models there are virtually no production-quality models freely available to train on. Even companies like ILM or Weta have nowhere near the number of models that would be needed to train a robust modelling AI.
The thing is, CAD models look perfect. In that state, however, they are completely un-editable. You have to go back to the CAD program to make edits to the original solid model.
No they don't. I am explicitly saying they often have the exact topology issues that are showcased here. They don't have some of the other issues, but a huge chunk of this article is about bad topology. I have imported hundreds of STEP files into CG software that have absolutely horrific topology, because one workflow cares about producing real things and the other is about CG modeling.
> They are completely un-editable in that state however.
This is also untrue. I do plenty of hard-surface modeling work on imported STEP files, including the retopology I've talked about.
> You have to go back to the cad program to make edits to the original solid model.
If you need to modify it for production, yeah? But the whole use case here seems to be e-commerce websites. In which case you can 100% take a CAD file you've imported and modify it (if you managed to get a good topo out of it) for visual/aesthetic stuff.
(Some CAD software actually does a good job exporting - I have a moi3d license specifically because it has way better exports, topology-wise, than solidworks or fusion. I build shit in solidworks and send it to moi before opening it in blender or houdini to do any render work.)
That’s not really how 3D modelling works. You can’t just improve some of the model; you have to improve all of it. Fixing the top of the paddle also changes how the junction at the handle goes, and so on. That’s why no one has solved AI 3D modelling yet. It’s like asking a gymnast to learn how to do the second half of a handspring first, and then for step 2 they can learn the first half. It doesn’t work like that.
The materials that go into a chip are nothing. The process of making the chip is roughly the same no matter how powerful it is. So having one chip that can satisfy a large range of customers' needs is much better than wasting development time making a custom, just-good-enough chip for each.
They really aren't. Every material that goes into every chip needs to be sourced from various mines around the world and shipped to factories to be assembled; then the end goods need to be shipped around the world again to be sold, or directly dumped.
High power, low power, it all has negative environmental impact.
Ultra-pure water production itself is responsible for untold amounts of hydrofluoric acid and ammonia, most etching processes have an F-gas involved, and most plants that do this work have tremendously high energy costs due to stability/HVAC needs.
The claim was that "the materials that go into a chip are nothing". Arguing that that is not the case does not put someone on the hook to explain, or even have any clue, how to do it better.
Maybe. They have the potential for faster semiconductors, but only after adequate modifications. Graphene isn't a semiconductor, and it isn't obvious that we'll find a way to fix that without (or even with) rare resources.
I'm not sure why you're asking this or what you're insinuating. The site is called Hacker News; it should be open to anarcho- and eco-hackers too. Not all of us believe in infinite growth.
Do you want to expand on why you're on this site?
I've been here for more than 15 years and I'm not the person I was when I signed up or when I went through life in a startup.
I don’t agree with your music synthesizer analogy. I own a synthesizer, however I don’t possess any musical talent whatsoever. I cannot for the life of me produce anything remotely listenable from the thing. I know how to use it, but cannot make good music. You just need to look at some street performer banging on a plastic bucket and entertaining a circle of people to realise that the ability to make music is orthogonal to having the right tools.
AI art is more like me pressing the demo button on the synth, looking you in the eye as it plays the preset tune and saying “I made this”
Would you scream at a child that shows you a beautiful shell they found on the beach -- "you didn't make that!" -- why assume that everyone's ego is entwined with sharing?
No, because the child is behaving as a curator, which is a valuable act. I never hear AI “artists” claim to be curators; they always claim to be creatives.
Photographers were not initially respected as artists -- they are now. The history of this cultural evolution is well documented. It is easier than typing a prompt to take a picture with a smartphone, yet the respect for photography somehow remains. It is definitely a cultural problem.
My theory about why a higher frame rate helps gamers is that for something like a whip turn, with a low frame rate, your brain has to take a brief moment to work out where it ended up looking after the pan. But if your frame rate is high enough, your brain can keep updating its state during the pan because the updates are continuous enough not to lose “state”. This means that when you finish the fast move, there is no delay while you reorient yourself for a few milliseconds.