He was pretty upfront about the possibility that his problems with Rust were of his own making. The places he tried to leverage conditional compilation could be charitably described as "creative", and would raise eyebrows even in a language like C - where the preprocessor is relatively unconstrained. I'm not familiar with his project beyond the snippets he shared, so he may have had good reason to effectively ifdef inside a function call instead of any of the more traditional locations.
Yeah, the HDSP-2132 is a common display component in military hardware. I'm sure that readability figures into it (especially for NVGs), but so does hardiness. My hopes of an 80x24 dumb terminal were dashed when I finally learned the unit replacement price... I don't need a rad hardened serial display that bad.
I've been looking into those Avago/Broadcom dot matrix displays recently. I don't think it'd be too hard to recreate something similar with tiny SMD LEDs on a PCB and a slightly intelligent driver board, for much cheaper. Would there be a market for something like that?
Only if you can get them listed at Digi-Key/Mouser/etc. Even if it's under a "maker" brand like Adafruit, Sparkfun, etc. Anyone who can get replacements for the TIL311 and friends listed there, and advertise them, will sell quite a few.
They're not hard to make, but they're the sort of thing that no one really wants to make in-house, since they're a means to an end rather than an end in themselves. Great thing to sell, though!
In grad school I taught an electronics lab class where the students would burn through a bunch of HDSP-0772s every semester. I thought about coming up with a replacement, but at the time you could still get them for a somewhat reasonable price (like $10 or $20?).
At today's price of $50 it might be cost effective to do as you say, but I imagine there's not a huge market.
Not rad hardened, and not quite 0.75mm, but you can buy a 1.25mm pitch LED matrix panel for a relatively low cost. It's an RGB panel rather than monochrome like the HDSP-2132.
Unless I'm misreading the page, it is claiming a resolution of 104x78. So that means 8,112 LEDs.
The HDSP-2132 is made of 8 character elements, each element has a resolution of 5x7. So a 80x24 terminal would have a resolution of 400x168... 67,200 LEDs.
So about an order of magnitude off, even before accounting for the built-in character spacing :)
I wasn't suggesting a single panel would cover it. I was suggesting it would be cheaper to use them instead. They are made to be grouped together and daisy chained to a single controller.
I believe each module would be 160 x 120 dots (19,200 dots total) though, based on module size and LED pitch. I'm unclear on where you got 104x78.
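Rough dot-count math in Python, taking the 5x7-per-character and 160x120-per-module figures above as given and ignoring inter-character gaps:

    import math

    # 80x24 terminal built from 5x7 character cells (no inter-character gap)
    chars_w, chars_h = 80, 24
    dot_w, dot_h = 5, 7
    term_w, term_h = chars_w * dot_w, chars_h * dot_h
    print(term_w, term_h, term_w * term_h)         # 400 168 67200 dots

    # Tiling that area with 160x120-dot RGB panel modules
    panel_w, panel_h = 160, 120
    panels = math.ceil(term_w / panel_w) * math.ceil(term_h / panel_h)
    print(panels)                                   # 3 x 2 = 6 daisy-chained modules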
It is odd: I've been involved in the cryptocurrency scene from the beginning, and I've seen the whole "Bitcoin kills Mother Gaia" talking point pop up every couple of years - but I've never seen such prolonged parroting as this last cycle. It is a silly complaint, because it never considers the wastefulness of the status quo. But it is an interesting method of attack, trying to conceal the fact that it is a call for others to act against their own interests, while also tickling their envy. It could be a dangerous gambit though: while most people can't see far enough ahead to realize that global carbon caps effectively freeze the structure of the international market, they can see when somebody is ham-fistedly screwing with their money.
That’s a dead easy argument to debunk. If each Visa payment used 600 kWh like a Bitcoin transaction, Visa alone would consume 3x as much power as the entire world generates and produce 100% of the world’s e-waste. So no, the status quo is orders of magnitude more efficient on a unit basis by definition.
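For anyone who wants to check the arithmetic, here's a rough sketch. The 600 kWh figure is the per-transaction estimate quoted above; the Visa volume and world generation numbers are ballpark assumptions of mine, not figures from this thread:

    # Rough sanity check of the "3x the world's generation" claim
    kwh_per_btc_tx = 600              # per-transaction estimate quoted above
    visa_tx_per_year = 150e9          # assumption: ~150 billion Visa payments/year
    world_gen_twh = 27_000            # assumption: ~27,000 TWh generated per year

    hypothetical_twh = visa_tx_per_year * kwh_per_btc_tx / 1e9   # kWh -> TWh
    print(hypothetical_twh)                  # 90,000 TWh
    print(hypothetical_twh / world_gen_twh)  # ~3.3x world generation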
Virtually all of those services will still have to exist under crypto. Regulatory frameworks will still exist, centralized entities will provide Lightning channels to small-fry actors, firms will still host servers to execute trades based on the network, etc.
Yes, but they will be dealing with a currency that has mathematical guarantees - which changes things entirely. Imagine a scenario in which self-driving cars are not only the norm, but they've achieved a perfect safety record due to an open source, formally verified codebase. In that scenario, do you think the National Highway Traffic Safety Administration still needs 600 employees and a 900 million dollar budget? I have no doubt that it would still exist, but it would be addressing an infinitely simpler problem - and the reduction in resource requirements would cut it down to a skeleton crew that would operate much like stub code in the wake of a refactor.
600 employees is nothing. That’s a skeleton crew already. Yeah they’d probably still keep them on payroll to work on developing and maintaining safety standards.
That aside, you’re not operating with facts, just speculation. You’ve not quantified what you think the fully realized cost is today or what it would go down to in the future.
I’m not. I’m saying that by scaling just one small corner up to Bitcoin’s inefficiency, it cannot be true, as it would use 3x the world’s power supply. It cannot be true by induction. If we scaled those other aspects up too, we’d be using hundreds or thousands of times more power than the world generates. By doing that I’d actually be making my argument stronger.
Yep, I agree completely. I’ve seen the argument that the status quo is somehow less efficient on a per transaction basis than Bitcoin is today, and that’s thermodynamically impossible.
Bitcoin's transaction throughput is independent of its energy consumption. If its block size limit were raised to allow Visa-scale transaction throughput, its per-transaction energy cost would be 1/1000th of what it is now.
Bitcoin Cash forked Bitcoin to provide a block size limit that allows these kinds of throughput levels, while Ethereum enables both sophisticated transaction compression methods and layer-2 models that can achieve Visa-scale throughput without raising layer-1 block sizes.
So attacking the cryptocurrency concept based on Bitcoin's peculiar shortcomings is misguided.
When you blow up the transaction limit you physically centralize the verification process. This is why you'd see miners gamble with sitting on a solved block and secretly beginning the next search, with a few seconds of head start, before announcing to the network, or skipping the inclusion of any transactions: because fractions of a second make a big difference. Bigger blocks propagate more slowly, and magnify the advantage of employing those kinds of undesirable behaviors. This is why HFT boxes end up as physically close to Wall Street as possible. This is also why bcash got so much early support from Chinese miners.
>>When you blow up the transaction limit you physically centralize the verification process.
You could 500X the transaction throughput and running a full node would only require a 25 Mbps internet connection.
>>Bigger blocks propagate more slowly, and magnify the advantage of employing those kinds of undesirable behaviors.
Miners do not propagate blocks all at once. They use propagation protocols like compact blocks to only propagate the transactions not already in other nodes' mempools: https://ieeexplore.ieee.org/abstract/document/8922597
The transactions fill the mempool over the course of the roughly 10 minute average interval between blocks, meaning that by the time a block is discovered, other nodes already have almost all of the transactions in that block.
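For anyone unfamiliar with the idea, here's a toy sketch of that style of relay - not the real BIP 152 wire format, just the "rebuild the block from your own mempool" concept. The short-ID scheme and numbers below are illustrative:

    import hashlib

    # Toy compact-block-style relay: send short IDs instead of full transactions,
    # let the receiver rebuild the block from its own mempool, and only fetch the
    # handful of transactions it is missing.

    def short_id(tx: bytes) -> bytes:
        return hashlib.sha256(tx).digest()[:6]        # 6-byte ID vs ~250-byte tx

    def reconstruct(announced_ids, mempool):
        by_id = {short_id(tx): tx for tx in mempool}
        have = [by_id[i] for i in announced_ids if i in by_id]
        missing = [i for i in announced_ids if i not in by_id]
        return have, missing                           # request `missing` explicitly

    block = [("tx%d" % i).encode() for i in range(2000)]
    mempool = block[:1990]                             # receiver already has ~99.5%
    ids = [short_id(tx) for tx in block]
    have, missing = reconstruct(ids, mempool)
    print(len(ids) * 6, "bytes announced;", len(missing), "txs left to fetch")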
The 'validation free mining until the new block is fully propagated and validated' strategy further mitigates the competitive disadvantage miners face when receiving large blocks from other miners.
All in all, centralization concerns do not justify preventing Bitcoin blocks from growing to meet market demand up until they reach at least 500 MB (i.e. 3.3 MB/s of transaction throughput). The current 1 MB - or 1.6 MB assuming full SegWit adoption - limit, which only allows for 2 KB/s of transaction throughput, is absurdly inadequate, and makes Bitcoin's current energy consumption absurdly wasteful.
You are overlooking the disadvantage of not verifying the prior hash, which you can't do until you can verify hashMerkleRoot, which you can't do until you have collected the entire block. If you skip verification, you open yourself up to bad actors feeding you a bogus header - and you only figure that out after being drip fed a bunch of transactions that don't match the merkle root. Erasure coding doesn't fix that. In any case, proponents of the 'cram everything in the public ledger' method have already advocated 128MB and even completely uncapped block sizes... so the outcome is predictable.
Take a hint from every successful network that has ever networked since ever: hierarchy, extensibility, etc. Demanding that it all be crammed into a homogeneous block is so obviously doomed to fail that you should be suspicious of the proponents' motives.
Where are you getting ideas like this? How fast do you think internet connections are?
Only in the context of bitcoin do people start thinking 250 byte transactions are somehow difficult to send over gigabit connections. The average bitcoin block is at most 900 -kilobytes-, once every 10 minutes on average.
How is it that people have absorbed propaganda that makes them actually think that causes anyone a problem? blizzard.com is 13MB, gfycat.com is 29MB. Who are these miners that are struggling to send out 900KB ?
> Where are you getting ideas like this? How fast do you think internet connections are?
As fast as the packet source wants? The only way to defend against a slowloris DoS is by accounting for it at the application layer, which is both unusual and difficult - as DoS attacks are usually handled at the transport layer. In this case that could mean applying a deadline for the block announcement in its entirety - which means mandating a minimum connection speed, and classifying every connection that dips below that as an attack... so you better have a good SLA with your upstream network. Well, that is no problem for centralized operations... Guess what happens to cryptocurrencies that adopt massive 128MB+ blocks in order to increase their transaction throughput - they become incredibly fragile as the slightest amount of packet loss starts waves of peer bans.
> Who are these miners that are struggling to send out 900KB ?
Anyone who wants to handicap a competing pool by forcing them to either wait for the complete false block, or waste time hashing on a fake block header.
When are you going to admit that you haven't thought this through?
Now you are trying to say that after 11 years, miners will for some reason sit and wait for someone who isn't sending them a block fast enough, and that other miners will attack people like that.
No one is waiting. If someone tries to send out a block slowly, someone else mines that block and sends it out fast. This is not difficult to understand.
It's amazing all the made up nonsense you've thrown out to try to predict a future that has already come and gone. Why are you so desperate to prove something so ridiculous? What you are talking about doesn't happen at all at any throughput on any cryptocurrency.
Check the reference client buglist, they got hit with a slowloris resource exhaustion combo. They fixed the resource exhaustion (kind of), they did not address the slowloris vulnerability. So again, you have no idea what you're talking about. Doubly so if you didn't write the backend of a major mining pool - because you're just running your mouth about code you haven't seen otherwise.
Are you actually saying that a bug in bitcoin is a reason that no cryptocurrency can scale to many times more transaction throughput? Why do ethereum and bitcoin cash work so well?
Did you write the backend of a mining pool and build in a bug where you wait for a block to be sent slowly? That's a pretty crazy mistake to just wait on a single connection.
>>If you skip verification, you open yourself up to bad actors feeding you a bogus header - and you only figure that out after being drip fed a bunch of transactions that don't match the merkle root.
Malicious actors can't give a bogus header, because you can verify the bogus header doesn't have enough PoW. If the malicious actor does produce sufficient PoW for their bogus header, they are foregoing massive block rewards to fool your node for a few seconds. That's why this kind of malicious activity doesn't happen, and why miners do validation free mining on a new header until they've verified the whole block.
That doesn't address my point at all. You can verify the requisite proof of work was done just by hashing the data in the block header to see if it meets the difficulty requirement.
The malicious actor cannot fake that. They have to produce massive amounts of proof of work to find a value in the nonce data field in the block header that results in that hash meeting the required difficulty, and even if the malicious actor delays transmitting the whole block, they cannot delay transmitting the block header, or other nodes will not even bother trying to extend their block.
So all miners can immediately verify whether the block headers they are receiving have the required PoW behind them, and it would be very costly for a malicious actor to generate fake block headers with enough PoW to be accepted, while giving the malicious actor very little advantage in an attack, meaning it would be a completely impractical and ineffective way to attack Bitcoin.
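For reference, the header-only check both sides keep referring to looks roughly like this - a minimal Python sketch, not consensus code. It confirms the 80-byte header meets its own difficulty target, but it says nothing about whether the transactions you are later sent actually hash to that header's merkle root:

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def bits_to_target(nbits: int) -> int:
        # Decode the compact "nBits" difficulty encoding
        exponent = nbits >> 24
        mantissa = nbits & 0x007FFFFF
        return mantissa * (1 << (8 * (exponent - 3)))

    def header_meets_target(header: bytes) -> bool:
        # 80-byte header: version | prev hash | merkle root | time | nBits | nonce
        assert len(header) == 80
        nbits = int.from_bytes(header[72:76], "little")
        block_hash = int.from_bytes(double_sha256(header), "little")
        return block_hash <= bits_to_target(nbits)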
What kind of a response is this? They said anyone can verify the proof of work and you are saying they can verify the proof of work. Where is the contradiction?
Verifying blocks is trivial. You aren't still saying that needing to receive 900KB is a big deal are you?
You seem to be acting like there is some sort of prediction that there will be a problem. Ethereum already produces blocks every 13 seconds at three times bitcoin's throughput.
I said that you can't verify the header until you have downloaded the entire block - meaning the time between a false header being submitted and the last transaction in the block being downloaded is time during which the miner either abandons its work on the last real block and sits idle, or risks hashing a new block before verifying the false header. He implied that you can somehow verify using the PoW. I demonstrated that you cannot. I'm not sure how you could have missed it.
> When you blow up the transaction limit you physically centralize the verification process.
What numbers are you basing that on? Bitcoin is 1.5KB/s. A $10 vps is 80,000 times faster than that.
> because fractions of a second make a big difference
No they don't. The examples you are talking about are rare and have very little impact. If the entire network makes blocks once every 10 minutes on average, each miner finds blocks much less frequently.
> Bigger blocks propagate more slowly, and magnify the advantage of employing those kinds of undesirable behaviors. This is why HFT boxes end up as physically close to Wall Street as possible.
Where are your numbers here? How long do you think it takes to send around 900KB ?
A single twitch stream will propagate more than that around the world every second to thousands of individuals. Cryptocurrency only has to send the same tiny amount of data every few minutes to miners.
High frequency traders trying to be physically closer to a connection are operating on the scale of single microseconds. You are comparing that to something 600 MILLION times slower.
> What numbers are you basing that on? Bitcoin is 1.5KB/s.
You know you can't verify an incomplete block, right? If you think you can just plow ahead after getting the first packet... you are going to have a really bad day once the other miners notice that you can be fooled into wasting hash power on bogus headers. You know that blocks get announced by the miners after they solve the hash, in chunks that can be up to the max blocksize, right? For Bitcoin that is 1MB, for bcash it is 32MB. Imagine if NASDAQ offered two levels of service: one that put the FIX protocol behind a 1MB buffer, and the other behind a 32MB buffer - which one would be disadvantaged? Anyone sitting at the 32MB buffer would be getting their lunch eaten. This is about as fundamental as it gets, so you might want to do some reading - because you've clearly got a big blindspot.
> No they don't. The examples you are talking about are rare and have very little impact.
If you are talking about bitcoin, you are very wrong - it is a thoroughly documented occurrence, it even has a name: the selfish miner strategy. If you are talking about bcash, I dunno one way or the other - I don't pay much attention to the joke coins. https://doi.org/10.1109/ICBC48266.2020.9169436
> How long do you think it takes to send around 900KB ?
Again, you need to actually do some reading on how mining works and what the network propagation characteristics are. I really don't want this to come off as mean-spirited, but it needs to be said: you obviously thought you knew how things worked, and you obviously don't - I wonder how many people you've misinformed.
> you are going to have a really bad day once the other miners notice that you can be fooled into wasting hash power on bogus headers.
What are you even talking about here? 900KB takes a fraction of a second to transfer on a modest internet connection; any miner with $10 a month for a VPS can transfer that in a millisecond. No one said anything about downloading fractions of blocks, you hallucinated that.
> Anyone sitting at the 32MB buffer would be getting their lunch eaten. This is about as fundamental as it gets, so you might want to do some reading - because you've clearly got a big blindspot.
What is it that you think is even happening? Blocks get sent out. One has 32x the transaction throughput. Why do you think 32MB or more would be anything other than trivial to send and receive? Any miner can send and receive that in a second.
> I don't pay much attention to the joke coins.
Bitcoin cash already has more sustained transactions than bitcoin. Maybe you should reach beyond the propaganda of /r/bitcoin
> it even has a name: the selfish miner strategy.
Miners that do that take the risk that someone else will propagate a block before them since they didn't share the previous block they found. Empty blocks have much more of an impact on bitcoin because the throughput is so constrained. Empty blocks do not have much of an impact on the usability of bitcoin cash because it has plenty of throughput. A miner maliciously mining empty blocks on bitcoin will also leave a lot of money on the table because they won't get transaction fees.
> you obviously thought you knew how things worked, and you obviously don't
Everything you said is either completely made up or an exaggeration of some minor effect. In your last message you compared something with a 10 minute window to millionths of a second. For some reason you ignored this and ignored the actual numbers of modern bandwidth. You keep using hyperbole but won't quantify anything because the numbers don't add up.
> What are you even talking about here? 900KB takes a fraction of a second to transfer on a modest internet connection...
You are assuming that there are no bad actors in the network who might not want to transfer a block in a fraction of a second, and don't have to because they control the packet rate. Also you seem really hung up on this 900KB figure; you've repeated it in 8 recent posts. You know bitcoin's 30 day average block size has been above 900KB since 2018-10-24? But why would you harp on bitcoin's block size when you are advocating for a max block size that could accommodate all of Visa's transactions? Is it because you know that a 300MB+ block guarantees the death of decentralization and you'd look ridiculous pretending otherwise? To quote you before you presumably exchanged your BTC to double down on your bcash airdrop:
"Once again, whatever people's views on bitcoin at least realize that mining hashes don't dictate how many transactions can be processed by a miner. They mine a block, then they include as many transactions as they think they can get away with and keep the latency low enough to propagate."
Ah yes, that has worked out great for several exchanges and various other cryptocurrency related services. The cloud has been great, fill it with all the things.
> No one said anything about downloading fractions of blocks, you hallucinated that.
You don't seem to understand how packet switched networks function either. Go look up "MTU"; I look forward to hearing how even the most modest internet connection enjoys IPv6 with a 300MB jumbo frame policy spanning the entire network path!
> Why do you think 32MB or more would be anything other than trivial to send and receive? Any miner can send and receive that in a second.
It isn't that it'd take them 10 minutes to download the entire block... it is that they are at a 0.92 second disadvantage to their competition - this drives the physical centralization.
> Bitcoin cash already has more sustained transactions than bitcoin. Maybe you should reach beyond the propaganda of /r/bitcoin
lol, the salty salty tears.
> Miners that do that take the risk that someone else will propagate a block before them since they didn't share the previous block they found.
And yet they do it, because it has been mathematically demonstrated to be a winning strategy - even before block bloat induced propagation delay. The rest of the statement is simply nonsensical.
> In your last message you compared something with a 10 minute window to millionths of a second. For some reason you ignored this and ignored the actual numbers of modern bandwidth. You keep using hyperbole but won't quantify anything because the numbers don't add up.
Yeah, you clearly don't understand the comically simple idea that a consistent head start on your competitors guarantees eventual domination. The 10 minute window is completely immaterial to the point. If you can consistently verify the newest announced block 1 second before every other miner - then your average solve rate is guaranteed to be 1 second faster than their average solve rate.
> You are assuming that there are no bad actors in the network who might not want to transfer a block in a fraction of a second,
No, you are assuming that it matters if people try this. Who cares if someone sends slow? No one waits for them. They send a block slow, someone else mines the block and sends it out fast. You have hallucinated this attack, it doesn't happen. Show me evidence of this having any effect if you can.
> bitcoin's 30 day average block size has been above 900KB since 2018-10-24
This is not true. 900KB is the MAX size of the blocks. The average is much lower due to variance. This is not difficult to see:
> But why would you harp on bitcoin's block size when you are advocating for a max block size that could accommodate all of Visa's transactions?
Where did I say any of that? All I've done is point out that the things you say have no evidence and don't even make sense.
> Ah yes, that has worked out great for several exchanges and various other cryptocurrency related services. The cloud has been great, fill it with all the things.
This is just vapid sarcasm. There are no problems with exchanges. If you have evidence, show it.
> You don't seem to understand how packet switched networks function either. Go lookup "MTU", I look forward to hearing how even the most modest internet connection enjoys IPV6 with a 300MB jumboframe policy spanning the entire network path!
Focus please, confront the topic at hand, stop with irrelevant nonsense.
> It isn't that it'd take them 10 minutes to download the entire block... it is that they are at a 0.92 second disadvantage to their competition - this drives the physical centralization.
Show evidence that this happens and is a problem to anyone.
> lol, the salty salty tears.
This is not evidence of anything. Ethereum and bitcoin cash do have more transactions going through them than bitcoin. I will show you the actual information:
> And yet they do it, because it has been mathematically demonstrated to be a winning strategy - even before block bloat induced propagation delay. The rest of the statement is simply nonsensical.
It isn't a winning strategy, it is a mild attack that accomplishes very little and loses money from not including transactions. Bitcoin cash does not have capacity problems so people don't even notice. Sitting on blocks accomplishes very little since the rest of the network is more likely to out mine you as time goes on.
> If you can consistently verify the newest announced block 1 second before every other miner - then your average solve rate is guaranteed to be 1 second faster than their average solve rate.
The time it takes to verify a block is trivial. If it was so important, why aren't all the blocks empty?
> I've never seen such prolonged parroting as this last cycle.
The scale of the thing is getting out of hand. Cryptocurrency mining is eating up power on the scale of a small country, a sizable fraction of GPU manufacturing capacity, and a noticeable fraction of wafer fab capacity.
This has some parallels in how Spain went broke mining gold in the Americas in the 1500s. Expeditions went to the New World and gold came back. They were rich! Well, no. They had only created inflation by creating a gold glut. At a much higher operating cost than printing money, too, because it took ships and armies to make this happen.
I've never seen an estimate that withstood any kind of scrutiny. I've seen estimates based on both fundamental misunderstandings of the function of mining, and wild assumptions regarding miner behavior and out-of-date ASIC specs. Have you bothered fact checking any of those claims you just made? I mean actually reading the papers they cite, and learning the methodology they used. Because while you're thinking about conquistadors, I'm thinking about the field of cybernetics in the 70's, and how it killed itself dead with ridiculously flawed methodology for modeling population growth - as the Davos types all nodded in agreement.
"Give me a one-handed Economist. All my economists say 'on the one hand...', then 'but on the other..."
Oh and it gets better: known bad data is still repeated, despite the poster being told why it was bad data - over a year ago. Pointing this out results in flagging, for reasons that have nothing to do with ego or narrative continuity... surely.
It would be very nice if you could search based on file name, as you can with btdig.com. If you could somehow provide BRE without melting down, that would definitely put you way ahead of the other DHT index attempts.
At the very beginning (in January) the search also went through filenames, but then:
1. It turned out that such a search requires quite a lot of disk space for the index.
2. It is not very useful without showing the actual matched filenames, which requires more code that I didn't have time to write at the time.
Yup, the most resource constrained form (PCRE2 would be the Cadillac-DoS-my-webapp option). I don't know how abnormal my problem is, but I've run into several cases where I needed a file that had a very specific naming structure - but a couple of substrings needed to be masked because that was the information I was looking for. That can easily be handled with two fixed string passes on a local file system, but not so much at the scale we're talking about here.
As far as the indexing issues you mentioned, I'm not a python guy, so I can't recommend a drop-in solution. But I have indexed pretty massive string datasets, and you definitely want to select an index method that was specifically devised for the key/value datatype you intend to ingest. So hashmaps are probably out :) It would also pay to adapt either the implementation or the data itself. For example, say you had a bunch of font file names you wanted to index (XyzSans.otf, AsdfSans.otf, AsdfSerif.otf, etc): a prefix tree would be a pretty good fit, especially if you reversed all the strings.
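A minimal sketch of that reversed-string trie, using the same made-up font names - purely illustrative, not from any particular indexer:

    class TrieNode:
        def __init__(self):
            self.children = {}
            self.names = []        # every filename passing through this node

    def insert(root, name):
        node = root
        for ch in reversed(name):  # reverse so shared suffixes become shared prefixes
            node = node.children.setdefault(ch, TrieNode())
            node.names.append(name)

    def ends_with(root, suffix):
        node = root
        for ch in reversed(suffix):
            if ch not in node.children:
                return []
            node = node.children[ch]
        return node.names

    root = TrieNode()
    for name in ["XyzSans.otf", "AsdfSans.otf", "AsdfSerif.otf"]:
        insert(root, name)
    print(ends_with(root, "Sans.otf"))   # ['XyzSans.otf', 'AsdfSans.otf']

Storing the full name list at every node trades memory for lookup speed; in practice you'd store IDs, or only mark terminal nodes and walk the subtree on demand.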
In principle it is possible, but there would be some limitations:
1. PostgreSQL has supported indexing to make regex searches faster since v9.3 (the current version is 13): https://www.postgresql.org/docs/12/pgtrgm.html . However, it makes them faster only in some (simpler) cases; in others we are back to a plain old full table scan (see the sketch after this list).
2. Speaking of the full table scan, it is not that bad an option: the average list of file paths for a torrent is approximately 6 KB, which works out to 300 GB per 50 million torrents - completely within reach of some VPS providers (like BuyVM). Still, it takes up to a few minutes per pass.
3. So, unless I find some efficient technology for regex search over large volumes of text, I would only be able to implement "offline" search (submit a query, receive a link to the search results in 5-30 minutes, depending on server load).
4. Another option would be to provide the file listings for download as CSV files (torrent_id, filepath) and let the user search from the command line (or with some text viewer/editor, though the most popular ones still load all contents into RAM). Compressed size would be around 1 KB per torrent, which is near 50 GB per 50 million torrents.
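Regarding point 1, the trigram-index route would look roughly like this (Python + psycopg2; the files(torrent_id, filepath) table and connection string are hypothetical). It helps when the pattern contains a literal substring of 3+ characters; very short or wildcard-heavy patterns still degrade to a sequential scan:

    import psycopg2

    conn = psycopg2.connect("dbname=dht")   # hypothetical connection string
    cur = conn.cursor()

    # One-time setup: trigram extension + GIN index on the file paths
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
    cur.execute("""
        CREATE INDEX IF NOT EXISTS files_filepath_trgm
            ON files USING gin (filepath gin_trgm_ops);
    """)
    conn.commit()

    # Regex search; the planner can use the trigram index for patterns like this
    cur.execute(
        "SELECT torrent_id, filepath FROM files WHERE filepath ~ %s LIMIT 100;",
        (r"Sans\.otf$",),
    )
    print(cur.fetchall())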
Yeah, it isn't something that is feasible if you can't tailor the infrastructure to the underlying data. An SQL backend is about as generic as it gets, hence the poor results. Enterprise DB deployments get away with it because they can justify throwing a lot more hardware behind a centralized generic data store.
Give that second link a closer look if you change your mind. It demonstrates a way of indexing 1.6 billion 80B strings in 24GB, and then returning the result of a fixed string search in 100ms. That is the reward you get when you venture outside of the LAMP stack: less resource consumption, greater performance, increased utility.
More an IBM archeology question: aside from the scattered projects to catalog specific product lines, are you aware of a more general effort? I've been pulling together a lot of their research papers and documentation related to the POWER architecture for a while, and I've noticed some pretty big missing pieces. For example: in order to best preserve the PDFs, while increasing their usefulness and reducing their maintenance burden (disk space), I set out to convert them to PDF/A. That means embedded fonts, which is fine - I'd ideally digitize as much of the text as possible (instead of throwing a hidden OCR layer on and calling it a day), but... the font is nowhere to be found. IBM had an extremely popular font, Press Roman, that is very common in journals and books published from the 60s to the late 80s, and as far as I can tell - it may now exist only in whatever remains of two printer font cartridge SKUs. I won't even get started on the problem regarding their plotting software.
It reminds me of how NASA simply lost so much of the original media, and what we have today is either purely accidental or the result of a considerable amount of work done by volunteer restoration groups. We really need to get a grip on this problem now, and it would be nice if IBM would actually help out with that - instead of leaving it to volunteers.
IBM has a very long history of diligent notetaking, maintaining several journals dedicated to internal development stretching back into the 50's. While I certainly haven't read everything, I've read enough to feel comfortable saying that the quality has had a sharp decline as transparent marketing replaced useful engineering. I suspect I'm not the only one who felt that way, because they recently shuttered those operations and tragically handed everything (from what I can tell) over to the paywalls. I generally don't feel much one way or the other about corporations, but seeing IBM decay like this genuinely makes me gloomy.
Not in python, maybe. Smart bombs have been doing the necessary image processing with much less processing power, on much less capable sensors, for a long time.
Dunno about guided bombs, but cruise missiles have been using some pretty fascinating techniques to navigate before GPS was a thing. It's probably hard to find CPU specs, because they're defense technology.
It is surprisingly easy to find, as that stuff is pretty thoroughly covered in academic/industry journals. The only hindrance to access is a credit card number for the paywall.
The helicopter project is said somewhere to have cost $80 million.
It would be interesting to know the cost allocation for that Sony/Samsung chip. The project manager and scientists/engineers could have listed the design challenges for an industry player like TSMC/Apple to go beat with an offering - a short run of 20 chips specifically for this helicopter.
What. I don't understand how you can make this claim. Guided bombs are NOT using CV with optical cameras. They use lasers, GPS, and other non-"fancy" techniques.
I just don't get in what world you think military munitions are using CV for targeting bombs.
So I guess you've never heard of the AGM-62 Walleye, or anything else that came out of China Lake. Before you try backpedaling with some silly nonsense about how gating isn't real CV, maybe do a quick search through the journals that cover this stuff: AIAA would be a good start. Another path would be in relation to counter-countermeasures, and ground noise rejection for air-to-ground radar guided munitions. That stuff was deployed regularly all the way back to Vietnam.
The other poster mentions analog techniques used in contrast-tracking TV-guided munitions like the Walleye, but digital "CV-like" image/contour matching methods were used on the original Tomahawk cruise missile and the Pershing II missile to provide terrain-matching navigation and target guidance. GPS was neither sufficiently complete nor accurate for strategic weapons in the late 1970s/early 1980s.
In more modern weapons, imaging IR sensors are well-established for terminal guidance on missiles like LRASM, JASSM, or NSM to distinguish targets from clutter and identify specific target features (specific parts of a ship, for example). Of course "traditional" "IR-homing" SAMs and AAMs now use imaging sensors (often with multiple modes like IR+UV) to distinguish between the target and decoys/jammers. Even your basic shoulder-fired anti-tank missile like Javelin requires some amount of CV to identify and track a moving target.
aka edge detection :) I don't remember if it was the Sidewinder or Walleye that eventually dropped in a CCD (or both), but I know that the Maverick (which is technically older than Walleye) got along without a CCD until the GWOT - when it finally upgraded. The Javelin actually beat Maverick in that regard, having a 64x64 sensor 10 years earlier - able to handle scaling and perspective change for the 2-d designated target pattern.
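To make the "CV-like" part concrete, here's a toy correlation tracker on a 64x64 frame - obviously nothing like actual seeker firmware, just the core template/contrast matching idea, with all names and numbers made up (numpy):

    import numpy as np

    # Slide a small reference patch of the designated target over the next frame;
    # the best normalized-correlation match gives the new aimpoint.
    def track(template, frame):
        th, tw = template.shape
        t = template - template.mean()
        best, best_xy = -np.inf, (0, 0)
        for y in range(frame.shape[0] - th + 1):
            for x in range(frame.shape[1] - tw + 1):
                patch = frame[y:y+th, x:x+tw]
                p = patch - patch.mean()
                denom = np.linalg.norm(t) * np.linalg.norm(p)
                score = float((t * p).sum() / denom) if denom else -np.inf
                if score > best:
                    best, best_xy = score, (x, y)
        return best_xy, best

    rng = np.random.default_rng(0)
    frame = rng.normal(0, 1, (64, 64))
    template = frame[20:28, 30:38].copy()                     # "designate" an 8x8 target patch
    next_frame = np.roll(frame, shift=(2, -3), axis=(0, 1))   # target drifts between frames
    print(track(template, next_frame))                        # best match at (27, 22)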
Or AMD could simply knock off the binary blobs. The DRM excuse has always been weak, because it is a tiny fraction of the total firmware blob - and it'd be easy to make it so that the legally hobbled hardware decoder simply errors out in cases where the end user chooses not to load the DRM blob. Boom, no more dependence on AMD interns. I've been following their commit logs pretty closely for a year now, and I frequently see some amazing accidental admissions about the left hand (software team) not knowing what the right hand (hardware team) is doing. During one very frustrating series of patches, it was difficult to resist the impulse to say "Go get your dad."