there is a good chance that you will benefit from this step. It's easier for the manufacturer to make the manual available globally than to restrict access to Californians only.
EDIT: Is it even possible to restrict access to people from a US state? Even if I wanted to do that I would have no idea how.
i have some nanopi neo(1)s and they're rock solid - my rule of thumb with arm boards is that support is good enough if it has an armbian _debian_ (vs just armbian ubuntu) release
Cheers, I might look at getting one then, as I had problems with my Pi's network connectivity being slow compared to a laptop's using the same Ethernet-over-powerline adapter.
no idea about PoE on either board but if i/o is a concern you might want to take a look at the $25 rock64 with gbe and usb3 - i/o on my neo1 maxes out somewhere just below 20MB/s but my rock64 has no problem doing 80MB/s with some ancient spinny disks, and some armbian devs have done ~200MB/s with ssds iirc
os support on the rock64 is not quite there yet, but given the features at that price point i expect it to catch up
waste of (taxpayer) money using the (overpriced) rpi's gimped (no aes, usb3, gbe) soc
the rpi value prop is the community support, but that would mean this cluster is running a 32bit os - so what really is the point of using these instead of something smaller and cheaper, or the same size and more powerful, for the same money?
Good thing that national laboratories and research in general don't have to show profitability, then. Because otherwise we'd get nothing done and funded, thanks to people like you.
i don't even know where to start understanding this remark
i am questioning why they are using immensely popular and commensurately overpriced yet woefully underspecced components instead of something better and/or cheaper
is it just because 'raspberry pi' makes for a hot title in a press release?
Your question might be answered if you offered an example (or two) of something more capable and/or cheaper. I'm really interested in what you think would be cheaper.
Keep in mind the goal seems to be to build something with a high node count rather than just core count, so small size is important.
Because someone who understood the available resources and the actual user needs performed an analysis and determined that this would be sufficient. Commodity hardware eliminates the need to spend a year or two engineering a custom board, dealing with vendor BSPs, etc. It also eliminates the FTE that would be required to support all this.
They could also have chosen to use a reference design from another manufacturer but it's basically like using RPi but 100x more expensive. There's a good case to be made that this is the most cost-effective design for what they're trying to accomplish.
They could also just have bought 12 Epyc 64-core servers to get equivalent performance, assuming they are exclusively compute bound. If they are IO bound, even just 3 nodes with 100Gbit NICs would be enough to outperform the Raspberry Pi cluster.
If they really wanted to develop a competitive cluster they'd need at least a SoC with 4 A72 cores, 10gbit NIC, 8GB RAM, and a local 128GB SSD.
Edit: I misread the article; it's not a cluster with 3000 Raspberry Pis, it's just 3000 cores. 3 Epyc nodes are faster than this cluster.
It's not necessarily about speed, it's about how to write code that runs efficiently, concurrently and/or in parallel on that many nodes.
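To make that concrete, here's a minimal sketch of the kind of node-parallel program you'd shake out on a cluster like this, assuming an MPI stack and mpi4py are installed on the nodes (the article doesn't say what software they actually run):

```python
# Minimal sketch (not from the article): each MPI rank sums its own slice of a
# range and rank 0 combines the partial results. Assumes mpi4py and an MPI
# runtime are available on the nodes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes across the cluster

# Each rank takes every size-th element starting at its own rank.
partial = sum(range(rank, 1_000_000, size))

# Reduce the partial sums onto rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 1,000,000 numbers computed by {size} ranks: {total}")
```

Run with something like `mpirun -n <ranks> python <script>.py`. The bugs that matter at scale (rank imbalance, communication patterns, deadlocks) show up just as well on 1.2GHz ARM cores as on the production machine, even though the absolute speed doesn't transfer.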
IIRC, that's kinda how Ethernet came to be: they were working on the computing world of the future at Xerox PARC, and they had to create clusters to emulate the CPU power that would be available in the coming years. Looking at the current trend, from phones to servers, they've gone from two cores to I-don't-have-enough-fingers-except-if-I-count-in-binary cores. A 3000-core Raspberry Pi cluster can be an emulation of the computing environment of tomorrow, not in terms of raw power but in terms of distributed computing, and lead to unforeseen inventions like the now-ubiquitous Ethernet.
Please do tell us about these better specced, better supported off-the-shelf commodity components!
Unless they're planning to fab 100 of these, chances are high that even with the retail margin, a Pi still yields one of the cheapest BOMs for a one-off project like this.
you do understand that 'better supported' in the case of the raspberrypi means that you are running a 32bit os? that's all they officially support - and have no plans to change
do you really think something better and cheaper is impossible? the rpi is _the most overpriced sbc on the market_, particularly if you're not using it for raspbian (32bit only) and the module ecosystem
nanopi neo2 and rock64 are better and cheaper alternatives
Ugh, it has one of those godforsaken Allwinner chips on it. Good luck getting 750 of those working at once. The RPis are more expensive up front, but they don't require the researchers to learn Mandarin to get tech support or read the docs, on the outside chance such things are even available.
Engineering teams rarely quibble over the price of parts when reliability and confidence are the greater concern. Sure, the neo2 is cheaper and the rock64 is more powerful, but I haven't heard of either. What is the probability of encountering a weird, never-before-seen bug on these platforms vs. the Pi? Now what's the cost of dealing with that bug, including the risk-cost of having to scrap the entire platform because it's unscalable? It's probably at least $20 x 750 = $15k.
Pi has at least 10x more community eyes crawling over the whole system than the next three combined. You can't put a price tag on that.
Also consider the ecosystem for these boards. Do you want your researchers dealing with the non-upstreamed support around Allwinner chips? Additionally consider time to solution, this machine is based on BitScope's Blade: http://bitscope.com/product/blade/.
32-bit vs 64-bit is largely inconsequential when the nodes only have 1-2GB of RAM. The ecosystem I'm referring to means you can run a vanilla Linux on a RPi with no extra work. You can google a problem and have a reasonable chance of finding a solution around the RPi and so on.
From what I understand you can see a decent improvement in speed by going 64bit on ARM because they used the crossover to dump a fair bit of legacy braindamage in the instruction set that was preventing them from making certain optimizations.
On the other hand, I look at the 3000 core figure and think that it's roughly on par with high end GPUs. The clock rates aren't terribly different either. The range of applications where this beats out GPU solutions is probably fairly narrow, especially given the terrible IO bottleneck on the RPis.
For comparison, a $7,500 TITAN X has 3072 CUDA cores clocked at 1Ghz. This cluster has 3,000 CPU cores clocked at 1.2Ghz. On the TITAN card all of those cores share the same 12GB of memory with 336.5GB/s of memory bandwidth. On the cluster every 4 cores shares 1GB of memory with (I think) 3.6GB/s bandwidth. Of course communication outside of those 4 cores is restricted to 0.0125GB/s at best.
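Putting those figures side by side (the 3.6GB/s per node is my estimate, as noted above):

```python
# Back-of-the-envelope comparison using the figures quoted above.
titan_cores = 3072
titan_bw_gbs = 336.5                  # GB/s shared across all CUDA cores
titan_bw_per_core = titan_bw_gbs / titan_cores

cluster_cores = 3000
cluster_bw_per_node_gbs = 3.6         # GB/s shared by the 4 cores on one Pi (estimate)
cluster_bw_per_core = cluster_bw_per_node_gbs / 4
cluster_aggregate_bw = cluster_bw_per_core * cluster_cores

interconnect_gbs = 0.0125             # ~100Mbit Ethernet per node

print(f"TITAN X: {titan_bw_per_core * 1000:.1f} MB/s of memory bandwidth per core")
print(f"Cluster: {cluster_bw_per_core * 1000:.1f} MB/s per core locally, "
      f"{cluster_aggregate_bw:.0f} GB/s aggregate")
print(f"...but anything leaving a node is limited to {interconnect_gbs * 1000:.1f} MB/s")
```

So the per-core local bandwidth isn't actually bad; it's the moment data has to cross nodes that the interconnect dominates.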
for one, why are _researchers_ using largely obsolete technology; for another, many high performance computing tasks perform significantly faster on 64bit (e.g., lmdb)
Performance isn't actually the objective of the Pi cluster; the people using it have a real supercomputer next door. It's a testbed so they can validate programs before transferring them to the expensive supercomputer.
I would imagine going from a 10-node to 100-node system is more overall complicated than going from 32 to 64. Sure the instructions change, but that should basically be all abstracted away by the toolchain. However job management, allocation, data logistics, queues, cache invalidation, bottlenecks, etc, are all key issues that compound non-linearly with scale.
Chemists still use thin-layer chromatography, a technique 100 years old, day-in-and-out, in every lab in the world, even when HPLC and NMR exist. Why? It's cheap, fast, and works well enough.
Having said that I agree the Pi is overdue for a refresh; it needs gigabit ethernet and usb 3 at a minimum but faster interfaces would be great. I think people execute these projects because the Pi is a great reference architecture that can be bought at scale and that has been proven by the large user community.
I'd love it if the storage and Ethernet weren't hanging off of the same USB. Or on USB at all. The USB is a perennial bottleneck on the RPi.
There are a lot of boards that implement it correctly in hardware, but then make a hash out of the driver support with binary blob drivers that are fixed to a particular kernel version and crash from time to time.
It's probably not technically feasible currently, but I'd love to see a board where all of the hardware is open (even the 3D acceleration!) and already mainlined into the kernel so you could just install whatever distro you want on it and available at a price point under $50. I'm really tired of "you need to dump this proprietary binary blob into the graphics chip before the rest of the board can even start to boot." Why is it taking so long to come up with a universal boot solution, something that could be integrated into GRUB so you don't need to program magic offsets into the bootloader to make it work? PC hardware manufacturers more or less solved this problem 30 years ago, and I'm not taking "but the hardware is so specialized that you can't do it" as an answer anymore. The SBC world is absolutely crammed with stovepipes for no good reason.
I may be mistaken, but didn't they say that there wouldn't be any more releases? (Seems insane to me, but apparently their goal is to drive education, and where the board is now is, I guess, enough.) Hoping that isn't the case.
It was just explained to me the other day on Hacker News how CPU microcode gets delivered with the kernel, and that it gets installed automatically on every boot. So why is it a separate package (which, it turns out, I don't have)?
Well it is a separate package because it is fundamentally independent of the kernel. For example, your Debian system might want to use a different kernel like GNU's Hurd, kFreeBSD, or NetBSD, so by keeping those packages separate, they can easily be used interchangeably. Also if you are on an AMD system, you wouldn't want Intel microcode, but you might still want the same Linux kernel.
Another big issue is that the Intel microcode is proprietary, so separating it from the kernel means users can choose whether they want a totally free system, which would mean not loading microcode updates with a libre kernel. This is done, for instance, with the Parabola and Trisquel distros, and is needed to obtain the FSF's totally-free certification.
Does this mean that my Ubuntu installation runs stock Intel microcode? So I actually have to know about this, and then go out of my way and figure out how to install it? Or should it be installed by default on Ubuntu? Because the package called `intel-microcode` isn't installed for me now.
Debian removes binary blobs from the kernel, putting them into separate microcode and firmware packages in non-free instead.
Intel places restrictions against reverse engineering the microcode, and it is not provided in its preferred source form. Both of these violate the Debian Free Software Guidelines, so it can only be in non-free at best.
I thought you needed the microcode to boot, so how can a totally free OS even function? And I guess people who refuse the non-free microcode will keep these bugs now?
Also, I have basic Ubuntu. I'm running `apt list --installed` and I don't have the microcode package in the list. Does that command not list dependencies, or am I just missing it?
The CPU has a built-in baseline microcode that it uses on every boot (runtime microcode updates are not persistent), OSes don't need to necessarily update it, but they can.
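If you want to see which revision is actually loaded right now (independent of which packages are installed), the kernel reports it in /proc/cpuinfo on x86 Linux. A quick sketch in Python, assuming a Linux system:

```python
# Minimal sketch: report the microcode revision the kernel says each CPU is running.
# On x86 Linux, /proc/cpuinfo exposes a "microcode" field per logical CPU.
def loaded_microcode_revisions(path="/proc/cpuinfo"):
    revisions = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                # lines look like: "microcode\t: 0xb4"
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if __name__ == "__main__":
    revs = loaded_microcode_revisions()
    print("loaded microcode revision(s):", ", ".join(sorted(revs)) or "not reported")
```

Without the intel-microcode package this will just show whatever baseline revision is baked into the CPU or applied by the BIOS/UEFI firmware.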
Intellectually and academically dishonest hitpiece by peddlers of a competing cryptocoin.
That this subset of transactions is not safe is not news, nor is it even original research - it was covered in research more than 2 years ago by The Monero Project itself - and is something the project has addressed since and is working to further improve even beyond the recommendations of this paper.
Andrew Miller does not hide his ties to Zcash; I believe none of the other authors are associated with Zcash. I do not think he needs to recuse himself from academic study of competing currencies, just because he has loose ties to Zcash.
Also, the authors do not hide the fact that the vulnerability is not new. Most science is incremental; I haven't seen any evidence of 'academic dishonesty'.
>The Zcash Foundation will now be endowed with 273,000 zcash, worth more than $13m at press time. As part of the network’s rules, 10% of the cryptocurrency’s mining rewards are automatically awarded to stakeholders.
>The four-person board of directors includes chair and president Andrew Miller, associate director of the Initiative for Cryptocurrencies and Contracts (IC3), and Matthew Green, assistant professor of computer science at Johns Hopkins University.
This paper is akin to me publishing a paper noting how insecure Windows for Workgroups 3.11 is, providing advice for securing it, and then Tweeting out that the paper found that "Windows is trivially insecure out of the box". Sure, the paper would technically be correct, and my Tweet might even technically be correct, but it would be irrelevant since nobody uses Windows for Workgroups 3.11.
Nobody CAN use mixin-0 transactions in Monero, because they've been banned since a March 2016 hard fork that took over a year to plan and roll out. Nobody can be affected by down-chain use of those mixin-0 transactions because RingCT, added in the December 2016 hard fork, doesn't allow you to create ring sigs from them.
It's no wonder, then, that the paper, and accompanying website, only go up to the end of 2016 - they have no valid data from the beginning of 2017 onwards, and have published the paper seemingly only as a 'hit piece'.
This paper is an empirical analysis. The Monero reports introduced a theoretical attack with conditions, e.g. “a critical loss in untraceability across the whole network if parameters are poorly chosen and if an attacker owns a sufficient percentage of the network.”
The news is that our research confirms, for the first time, that this is actually the case, and it affects actual transactions.
The core of this paper's claim seems to be that 0-mixin transactions leave users exposed; however, Monero has since prohibited these types of transactions. So yes, these types of transactions are exposed going backwards, but moving forward they will not be.
This appears to be the Monero team's main response. Am I missing any other substantive arguments from the paper?
> Am I missing any other substantive arguments from the paper?
The second half of the paper, "Linking with temporal analysis". If you read the second half of the introduction, you will find that the primary technique they use for tracing 80% of transactions is found in the current version.
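As I understand it (my paraphrase, not the paper's code), the temporal analysis boils down to a "guess the newest ring member" heuristic, since decoys are sampled from the mostly older output set while the real spend tends to be recent:

```python
# Rough paraphrase of a "guess newest" temporal heuristic (not the paper's code):
# given a ring signature's members, guess that the most recently created output
# is the one actually being spent.
def guess_real_input(ring_members):
    """ring_members: list of (output_id, block_height) tuples."""
    return max(ring_members, key=lambda m: m[1])[0]

# Example: three decoys from old blocks plus one recent output.
ring = [("out_a", 120_000), ("out_b", 340_500), ("out_c", 98_765), ("out_d", 402_111)]
print(guess_real_input(ring))  # "out_d" -- the newest member
```

It's a probabilistic guess rather than a cryptographic break, which is where the disagreement in this thread comes from.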
The sloppiness of this code is really shocking, "when the Monero client chooses mixins, it does not take into account whether the potential mixins have already been spent."
> The sloppiness of this code is really shocking, "when the Monero client chooses mixins, it does not take into account whether the potential mixins have already been spent."
That's because RingCT removed the ability to create a ring signature with those outputs, so adding a complex whitelist / blacklist mechanism would have been a massive waste of time.
> The sloppiness of this code is really shocking, "when the Monero client chooses mixins, it does not take into account whether the potential mixins have already been spent."
I'm not thoroughly familiar with Monero's internals, so someone please correct me if I'm wrong, but I thought it was well known that this was a deliberate design decision. Previously spent amounts don't actually run a risk of being double spent, as they're only used for anonymization purposes, as far as I understand. So why is this considered "sloppy"?
It was a deliberate design decision as the issue was mitigated in a different manner starting in early 2016 (and introducing that check wouldn't be very effective anyway for other reasons).
The results of the mitigation are shown in the paper as Figure 5. The success of the techniques in the paper declines rapidly over the course of 2016 and would effectively reach zero if the dataset were extended (this is noted in the text when it states that RingCT transactions are immune, although even without RingCT it would still effectively reach zero).
The technique in the second half of the paper is not able to trace any transactions at all, as I explained in more detail in another reply. It identifies a partial weakness in the ring signatures but it is not capable of breaking them.
It is not true that this result was previously known. Please see the section “Comparison with related work on Monero linkability.” in the paper (https://monerolink.com), which starts "We note that earlier reports from Monero Research Labs(MRL-0001 [10] and MRL-0004 [7]) have previously discussed concerns about such deduction, called a “chain-reaction,” based on similar insights as described above. However, our results paint a strikingly different picture than these." and then goes on to show those striking differences in the new results and the previous knowledge.
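For anyone skimming, the "chain-reaction" deduction both the MRL reports and this paper discuss is simple enough to sketch (my own paraphrase, not code from either source): a 0-mixin input reveals its spent output outright, every revealed output can then be struck from other rings as a possible decoy, which can reveal further spends, and so on:

```python
# Minimal sketch of the "chain-reaction" deduction described above (my paraphrase).
# Each ring is the set of candidate outputs an input could be spending; a ring
# with exactly one un-eliminated candidate is considered traced.
def chain_reaction(rings):
    """rings: list of sets of output ids. Returns {ring_index: traced_output}."""
    traced = {}     # ring index -> output known to be the real spend
    spent = set()   # outputs known to be spent (so they are only decoys elsewhere)
    changed = True
    while changed:
        changed = False
        for i, ring in enumerate(rings):
            if i in traced:
                continue
            candidates = ring - spent
            if len(candidates) == 1:    # 0-mixin, or all other members eliminated
                real = next(iter(candidates))
                traced[i] = real
                spent.add(real)
                changed = True
    return traced

# Tiny example: ring 0 has no mixins, which then cascades through rings 1 and 2.
rings = [{"out1"}, {"out1", "out2"}, {"out1", "out2", "out3"}]
print(chain_reaction(rings))   # {0: 'out1', 1: 'out2', 2: 'out3'}
```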
As a little cherry on top, the CPU they sourced is also crippled, with no on-chip crypto. Even the half-price Pine64 has a CPU with crypto instruction sets.
I've searched high and low and absolutely can't get my hands on one.