quarterwave's comments

This is in fact a profound question.

The physics of electricity propagation in a powerline circuit is fundamentally the same as the propagation of FM radio waves, or even the beam from a flashlight. All of these examples involve electromagnetic energy propagating at the speed of light. So why do we need wires for powerlines, but not for propagating radio signals or light beams?

The principle is that light (in a vacuum at least) travels at a constant but finite velocity, c. Hence the electrical wiggle received at an observer's location (x, t) now was caused by some earlier wiggle at the source (x', t') such that t - t' = (x - x')/c. We should expect the effect of this 'time delay' to be more profound the faster the source wiggles in time.
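As a tiny numeric illustration (my own assumed numbers, not from the comment), compare the delay t - t' = (x - x')/c over a 30 m run of wire against one period of the source, at the three frequencies discussed next:

    # Rough sketch in Python; the 30 m run length is an assumption.
    c = 3.0e8        # speed of light, m/s
    distance = 30.0  # assumed wiring-run length, m

    for name, f in [("60 Hz mains", 60.0),
                    ("100 MHz FM", 100e6),
                    ("500 THz light", 500e12)]:
        delay = distance / c   # time for the wiggle to arrive
        period = 1.0 / f       # duration of one wiggle at the source
        print(f"{name}: delay = {delay / period:.2g} periods")

At 60 cycles the delay is a few millionths of a period, so the circuit behaves quasi-statically; at FM and optical frequencies the delay spans many periods of the source, which is the regime where radiation matters.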

A more specific way to state this is that the electromagnetic power radiated into free space by a dipole increases as the square of the dipole moment and the fourth power of the frequency. Now compare a powerline (60 cycles/sec) with FM radio (100 million cycles/sec) and with the light from a flashlight (500 trillion cycles/sec). That's why even atomic dipoles can produce intense visible light, while it would take a very, very large dipole to radiate a similar intensity at 60 cycles.
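For a rough feel of that fourth-power scaling, here is a back-of-envelope sketch (assuming the standard dipole-radiation result quoted above, with the same dipole moment in each case, so only the ratios mean anything):

    # Relative radiated power for the same dipole moment: P ~ f**4
    freqs = {
        "powerline (60 Hz)": 60.0,
        "FM radio (100 MHz)": 100e6,
        "flashlight (500 THz)": 500e12,
    }
    base = freqs["powerline (60 Hz)"]
    for name, f in freqs.items():
        print(f"{name}: {(f / base) ** 4:.2g}x the 60 Hz radiated power")

The FM case comes out around 10^25 times the 60-cycle figure, and visible light around 10^51 times, which is why a wiggling atom can out-radiate any 60-cycle dipole you could plausibly build.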

Feynman Lectures Vol.II is absolutely the best reference to learn this stuff.

That is not to say that a bird needs to make contact with two wires on a utility line in order to suffer harm. If the line voltage is high enough (e.g. 110,000 volts, as in high-tension power transmission), an electrical corona forms around the line from electrostatic effects. The corona is actually ionized air, indicating the high electrostatic field strengths in the region; it can emit a bluish glow and growl at 60 cycles. Birds can no doubt sense this corona and stay away.


> Birds can no doubt sense this corona

So can humans; it's a crackling sound.


Erlang = Owls (message passing)


Crystalline III-V channels can be grown epitaxially on silicon substrates, with buffer layers to grade the strain due to crystal lattice mismatch. With silicon wafers heading to 450 mm diameter, the economics would argue against native III-V substrates.

One advantage of native III-V substrates is they are semi-insulating (very high resistivity) so there is no need for transistor isolation wells. However, insulated substrates could be obtained on silicon by means of wafer bonding with an intermediate dielectric layer.


I am reading this book 'The Dollar Trap: How the US dollar tightened its grip on global finance' [1], in which economist Eswar Prasad explains how money flows into the US from around the world even when troubles originate in U.S. financial markets.

[1] http://thedollartrap.com/


Interesting work!

At first glance it would seem that Erlang or Akka provide a proven infrastructure for building such a multi-agent system+. Yet, a key construct is the notion of time and its passage.

(+Which is subtly different from an agent-based model; for simplicity, let's assume they're the same thing.)

There is a profound observation by Rich Hickey in his talk [1] 'Are we there yet? - A deconstruction of object-oriented time' (the baseball field slide): "Perception is massively parallel and requires no co-ordination - This is not message passing!"

For example, if the said ballpark suffered an earthquake, would the Matrix need to pass messages to each agent? Would a spectator continue walking toward the hot-dog stand just because the message hasn't shown up yet?

From this perspective, the notion of a 'container clock' presented here can be useful. The question is whether we 'get to stop the world when we want to look around' (Hickey), or not.

[1] http://wiki.jvmlangsummit.com/images/a/ab/HickeyJVMSummit200...


I have home back-up power based on an inverter charging a lead-acid battery (located in a sheltered area outside the home), which costs about $100/kWh. Usage is about 1-2 hours discharge per day. No matter how well serviced, I've found these batteries don't last beyond four years. Hence I'd pay even $400/kWh for a well-engineered deep-cycle battery that is safe, maintenance-free, and will last at least 10 years. Excluding balance of system, even.


> No matter how well serviced, I've found these batteries don't last beyond four years.

Wow, that's really short, especially with such a shallow discharge pattern.


Not necessarily! We don't know how deep the battery is being discharged. But we can make a guess: 4 * 365 is about 1,460 cycles; call it roughly 1,400.

http://www.mpoweruk.com/images/dod.gif

According to that chart (which is an approximation, of course), 1,400 cycles corresponds to about a 40% depth of discharge, which isn't terribly shallow.

The other variable is the discharge rate: the higher it is relative to battery capacity, the worse the efficiency and the greater the propensity to fail early. Often, doubling the pack size can extend the pack life by more than a factor of two, because the increased efficiency (the internal resistance is lower) reduces the depth of discharge by more than half.

http://batteryuniversity.com/_img/content/crate1.jpg

Of course it feels totally ridiculous to use only 20% of the nameplate capacity of the system (much worse than using 40%, which you can sort of rationalize as "half"), but if it decreases your dollars per joule, it might be worth it.
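Here's a toy sketch of that trade-off. The cycle-life curve is a made-up fit, loosely shaped like the mpoweruk chart linked above (not measured data), and the $100/kWh price is carried over from the comment that started this thread:

    def cycle_life(dod):
        # hypothetical lead-acid fit: ~1400 cycles at 40% DoD,
        # rising steeply as the depth of discharge shrinks
        return 1400 * (0.40 / dod) ** 1.3

    def lifetime_kwh_per_dollar(pack_kwh, daily_kwh, price_per_kwh=100.0):
        dod = daily_kwh / pack_kwh
        return (cycle_life(dod) * daily_kwh) / (pack_kwh * price_per_kwh)

    daily = 2.0  # kWh drawn per day
    for pack_kwh in (5.0, 10.0):  # 40% vs 20% depth of discharge
        print(f"{pack_kwh:.0f} kWh pack: "
              f"{lifetime_kwh_per_dollar(pack_kwh, daily):.1f} kWh delivered per dollar")

With those made-up numbers the doubled pack delivers roughly 25% more energy per dollar over its life, before even counting the internal-resistance gains mentioned above.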

EDIT:

I should also mention that if you're constantly charging and discharging, and you don't mind a little energy loss, you should look at nickel-iron batteries. They're not terribly efficient, nor are they cheap in absolute terms, but they're basically bulletproof.

http://en.wikipedia.org/wiki/Nickel%E2%80%93iron_battery

http://ironedison.com/


I've wondered for years why people always gravitate towards Li-Ion cells when talking about a home storage battery. Li-Ion's advantages are far, far less important for a dwelling. Weight doesn't matter... you won't be moving them. Size doesn't matter nearly as much as it does in a vehicle... losing a few inches off an entire wall in a garage won't really be an issue for most people. Edison cells are incredibly durable and much, much more environmentally friendly than any other battery tech I know of. The lifespan is nothing short of incredible too... you won't need to change them out.

Thank you very much for the 2nd link. I was unaware any company was still manufacturing them. The last time I looked, the last company I could find that made them stopped a few years prior. I'm glad someone is making them still/again and marketing for an appropriate use.

The one odd thing is the price... for something as low-tech (relatively speaking) as an Edison cell, I'd expect them to be much cheaper. Must be the lack of competition.


They may seem to be low tech, but the electrodes are works of art, and manufacturing them is a lot more expensive than a lead-acid battery of comparable capacity. They charge slower too, but they'll stand up to abuse better than any other rechargeable battery tech. I looked at them for a long time before settling on regular lead-acid; cost and finding an inverter that would charge these properly were the major factors.


Check out these _types_ of batteries.

http://www.sbsbattery.com/products-services/by-product/batte...

No recommendation on the supplier, but funny thing: the power companies and telcos have already figured this out for reliable DC power :)


Thanks, good point about telcos.


Do we have any reason to believe lithium can achieve that? How long are the Tesla batteries supposed to last...?


This is an excellent 'road map' to the two key theorems of information theory.

The focus on decoding complexity in the noisy coding theorem is particularly welcome. A separate article amplifying just this aspect (error exponent, Pareto complexity, etc.) would be valuable.


Any chance of a resurgence in Lisp machines? Especially in view of the changes in CPU architecture due to semiconductor scaling challenges.


There is at least one current project, Mezzano, that boots a Common Lisp and some apps on bare metal.

But the Lisp Machines were more than Lisp on hardware. They were about the software, the shell, the IDE. Some work was started more than 15 years ago to revive the CLIM API that made this possible, and the original authors still work on it now. I'm really glad about this:

https://github.com/froggey/Mezzano

https://github.com/robert-strandh/McCLIM


Almost certainly not; see the most recent post on this subject: https://news.ycombinator.com/item?id=9013669


Thanks for this reference.


What's stopping you from making it happen?

Developing your own special-purpose hardware is easier than ever these days. There are numerous open-source, off-the-shelf FPGAs that are mature and fast.


One of the biggest issues is that an FPGA design running at 50-100 MHz (compare the contemporaneous Cray-1 at 80 MHz), with little memory that can be used as cache, gets blown out of the water by a 3+ GHz CPU with megabytes of on-die cache. In terms of just being a "Lisp Machine", it only makes sense as retro-computing. Even a CADR, 3600, etc. simulator running on a fast x86-64 CPU would be (a lot) faster.

See more in my longer comment in this subthread.


Aren't we losing the focus by looking at the Lisp Machines only from the "HW" point of view? They were "ported" to Alpha, and I can run them today on an x86-64 VM.

I think we're overwhelmed by nostalgia, and this stops us from looking at what's important: software. We are missing the software pieces that made the Lisp Machines. We don't have those, and that is more important than not having a CL CPU.

I would hate to have a Lisp Machine made with today's custom hardware and all the C/C++/Java/Python guys come and ask: what was the fuss all about? Where's that IDE from 25 years ago you so proudly preached?


Well, we do have a rather early, fully legal 1981 copy of the system, and one or more illicit (but no one seems to care) copies of much later Symbolics systems (I don't know if those included source, though). So that is in part an issue of software archaeology, and we can still ask most if not all of the people involved about the details.

And I fully agree the focus should be on the software, as I hope I made clear in other comments in this topic.



What do they cost in single-unit quantities? Or rather, what does a development board that I can stick GiBs of (hopefully ECC) DRAM into cost? And the development tools?

I know I can do this with small scale ones, including some of the tools, on sub-$100/$200 boards with not a lot of memory (the research lowRISC has prompted me to do has been fascinating). If the answer to the above is 6 figures, the intersection of those who can afford it and those who are inclined to do it would be small.

Maybe not zero; then again, at what speed could you get a synchronous microcoded CPU working? Aren't we still talking way, way below 3+ GHz, like the 50-100 MHz I just cited? Is 200 MHz possible?

I've read of one that uses magic (and no doubt $$$ in tools) to translate your sync design into a faster async one in the middle of their magic FPGAs, but even then I don't recall the potential speed breaking past a GHz, if that. Although that was a while ago, 1-2 Moore's Law doublings ^_^.

Flip side, are the FPGA companies going to open up their kimonos to allow a lot more people to design in their increasingly inexpensive (Moore's Law) parts?


I'm not saying it is economically competitive. (It is possible to pay over $25k for a really high-end FPGA.) And if you are just synthesizing a general-purpose synchronous CPU you are definitely not going to get a lot of bang for the buck, because you are going against the grain of what an FPGA can provide. In that instance you're just vetting your design until you convert it over to a mask-programmable "equivalent", or do a full-custom design. The interesting thing about an FPGA would be to use its inherent parallelism, fine-grained programmability, and reprogrammability to run circles around something constrained by a von Neumann bottleneck in the cache hierarchy.

As to clock speeds, here's part of the abstract to a white paper that might interest you:

"A clock rate higher than 500 MHz can be supported on a mid-speed grade 7 series device with almost 100% of the logic slices, more than 90% of the DSP48 slices, and 70% of the block RAMs utilized. This requires the designer to follow some rather simple design rules that cover both algorithmic and implementation aspects. These rules are reviewed in the paper."

http://www.xilinx.com/support/documentation/white_papers/wp4...

...but clock speed isn't necessarily a super interesting factor if your data bus is 2048 bits wide, with a pipeline 100 stages deep, compared to, say, 64 bits wide and 10 stages deep on a CPU. Again, this is not to say that anyone should try implementing a Lisp machine on an FPGA to try to take market share away from Intel.
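For a crude sense of scale, just multiplying the numbers from the two paragraphs above (and ignoring everything that matters in practice, like memory bandwidth and keeping such a wide pipeline fed):

    def raw_gbits_per_sec(bus_bits, clock_hz):
        return bus_bits * clock_hz / 1e9

    fpga = raw_gbits_per_sec(2048, 500e6)  # wide datapath, modest clock
    cpu = raw_gbits_per_sec(64, 3e9)       # narrow datapath, fast clock
    print(f"FPGA datapath: {fpga:,.0f} Gbit/s")
    print(f"CPU datapath:  {cpu:,.0f} Gbit/s")
    print(f"ratio: ~{fpga / cpu:.1f}x")

That works out to roughly a 5x raw-datapath advantage for the FPGA despite the 6x slower clock, which is the whole point about width versus frequency.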


I think emacs is as close as we are going to get to a Lisp Machine today.


Emacs just lacks the whole operating system written in Lisp, a capable Lisp implementation, the GUI library, and a whole bunch of other things...


Emacs is a very tiny piece of the whole experience of using a Lisp Machine.


I've been thinking hard about this lately, and the first question for me is "What would a 21st Century Lisp Machine mean?"

Lisp Machines were created in part due to the desire to get the most performance possible back in the days when CPUs were made out of discrete low and medium scale integration TTL (there were also ECL hot-rods, but their much greater costs across the board starting with design limited them to proven concepts, like mainframes of proven value, supercomputers, and the Xerox Dorado, after the Alto etc. had proven the worth of the concept).

Everyone was limited: maximum logic speeds were pretty low; you could try to avoid using microcoded synchronous designs, but e.g. Honeywell proved that to be a terrible idea; and as noted elsewhere, memory was very dear. E.g. the Lisp Machine was conceived not long after Intel shipped the first generally available DRAM chip, a whopping 1,024 bits (which was used, along with the first model of the PDP-11, to provide graphics terminals to the MIT-AI PDP-10), etc. etc.

So there was a lot to be said for making a custom TTL CPU optimized for Lisp. And only that, initially: to provide some perspective, the three major improvements of LMI's LAMBDA CPU over the CADR were using Fairchild's FAST family of high-speed TTL, stealing one bit from the 8 bits dedicated to tags to double the address space (no doubt a hack enabled by it having a two-space copying GC), and adding a neat TRW 16-bit integer multiply chip.

The game radically changed when you could fit all of a CPU on a single silicon die. And for a whole bunch of well-discussed reasons, to which I would add Symbolics being very badly managed and LMI being killed off by dirty Canadian politics, there was no RISC-based Lisp processor; Lisp Machines didn't make the transition to that era. And now CPUs are so fast, so wide, and have so much cache (e.g. more L3 cache than a Lisp Machine of old was likely to have in DRAM) that the hardware case isn't compelling. Although I'm following the lowRISC project because they propose to add 2 tag bits to the RISC-V architecture.

So, we're really talking about software, and what the Lisp Machine was in that respect. Well, we partisans of it thought it was the highest-leveraged software development platform in existence, akin to supercomputers for leveraging scientists (another field that's changed radically, in part due to technology, in part due to geopolitics changing for the better).

For now, I'll finish this overly long comment by asking if a modern, productive programmer could be so without using a web browser along with the stuff we think of as software development tools. I.e., what would/should the scope of a 21st Century Lisp Machine be?


Thanks for the detailed perspective.

My limited & roseate view of a 21st century Lisp machine is based on an old theme: a massively parallel computing system using bespoke silicon logic blocks.

As you have noted below, not only are the cache sizes in a modern CPU monstrous, there are also the compilers optimized for those caches, the instructions, the branch prediction units, etc. There's no point in ending up with a chip that is much slower than an equivalent one running on a specially designed virtual machine, which is itself much slower than MPI.

Dreaming on, such a Lisp machine would need a vast collaborative academic effort with substantially new IP design, in, say, the 32 nm silicon process node. That's the most advanced node where lithography is still (somewhat) manageable for custom IP design.


Well, there's the first Connection Machine architecture, very roughly contemporaneous with Lisp Machines. (I had to regretfully tell my friend Danny Hillis that LMI wouldn't be able to provide Lisp Machines for Thinking Machines Corporation in time; TMC had to be formed because the project needed 1-2 analog engineers, whom MIT was structurally unable to pay, since no one gets paid more than a professor. He was really, legitimately pissed off by what Symbolics did with Macsyma, a sleazy licensing deal to keep it out of everyone else's hands (and they tried to get everyone in the world who'd gotten a copy of it to relinquish it). Macsyma was later neglected, even when it became the Symbolics cash cow.)

Anyway, if you're not talking ccNUMA, the limitations of which have got me looking hard at Barrelfish (http://www.barrelfish.org/), e.g. if you're talking about stuff in the land of MPI, then again it's going to be very hard to beat commodity CPUs.

Although in that dreaming, look at lowRISC (http://www.lowrisc.org/). Looking at things now, they propose taping out production silicon as soon as 2016, and say 40 and 28 nm processes look good. From the site:

What level of performance will it have?

To run Linux "well". The clock rate achieved will depend on the technology node and particular process selected. As a rough guide we would expect ~500-1GHz at 40nm and ~1.0-1.5GHz at 28nm.

Is volume fabrication feasible?

Yes. There are a number of routes open to us. Early production runs are likely to be done in batches of ~25 wafers. This would yield around 100-200K good chips per batch. We expect to produce packaged chips for less than $10 each.

And with a little quality time with Google, the numbers look good. Ignoring the minor detail of NRE like making masks, a single and very big wafer really doesn't cost all that much, like quite a bit less than $10K.
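A quick sanity check of those batch numbers, with an assumed die size, yield, and per-wafer price (all three are my guesses, not lowRISC figures):

    import math

    wafer_diameter_mm = 300.0
    die_area_mm2 = 10.0     # assumed small RISC-V SoC die
    wafer_cost = 5000.0     # assumed, order of magnitude only
    yield_fraction = 0.85   # assumed

    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross = wafer_area / die_area_mm2   # ignores edge loss and scribe lines
    good = gross * yield_fraction

    print(f"good dies per wafer: ~{good:,.0f}")
    print(f"per 25-wafer batch:  ~{25 * good:,.0f}")
    print(f"raw silicon cost per die: ~${wafer_cost / good:.2f}")

That lands around 150K good dies per 25-wafer batch and well under a dollar of raw silicon per die, comfortably consistent with the 100-200K-per-batch and sub-$10-packaged figures quoted from the site.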

And we now have tools to organize these sorts of efforts, e.g. crowdsourcing. But it's not trivial; e.g. one of the things that makes this messy is that modern chips have DRAM controllers, and that gets you heavily into analog land. But it's now conceivable, which hasn't been true for a very long time, say starting somewhere in the range between when the 68000 and the 386 shipped in the '80s.


I've been wondering about it since, like every other programmer, I hit that time when I'm really looking at programming languages and VMs (in the "what would I design" sense). Looking to Lisp Machines to see what they were about leads me to the question: would concentrating on hardware memory management / garbage collection be a starting point to answer your question?


One indication is that Azul, after 3 generations of ccNUMA systems with zillions of custom chips and a memory infrastructure that gives each one "mediocre" access speed to all the system's memory for running Java with gonzo GC ("generic" 64 bit RISC chips with things like a custom read barrier instruction), has given up and is doing their thing on x86-64 systems with their Zing product, albeit at least initially with tricks like kernel extensions to do bulk MMU operations before a single TLB invalidation. Look up their papers on the Pauseless and C4 GCs. The former was done in time to make it into the 2nd edition of sorts of the book on GC: http://www.amazon.com/Garbage-Collection-Handbook-Management...

Or to put it another way, without exhausting my bank account I could build, from parts I can purchase today on Newegg, a many-CPU, 3/4 TiB DRAM Supermicro system. Supermicro has standard boards with more memory, and has a monster you can only buy complete that'll hold 4 CPU chips and up to 6 TiB of DRAM on daughter boards; I think, based on some Googling, that it has a starting price of less than $35K.

Moore's Law is our friend. But its economics is not the friend of custom CPUs in competition with commodity ones.


I used to be a college teacher for several years, and my sympathies are with the author.

Advice for those considering this path in science/tech - learn to write code on the side. Pick something mainstream that will be around for a while, and which you can tap for a sideline. Develop deep expertise, spend as much time continually educating yourself as you do for others.


Contract programming parallels much of what he says about contract teaching, with the added difficulty of having to track down and find projects to work on.

"Learn to code in order to generate side income" isn't as simple as it sounds. I wish people would stop perpetuating this myth.


Is there an Erlang shell to evaluate expressions?

There seems to be a sub-window to the right of the module editor, but I'm unable to place the cursor there.

Tried in both Firefox 35 and Chrome 39.


No, there is no shell; only the main() function can be evaluated. You may be interested in http://tryerlang.org/, but it is much more restricted.

I don't understand which window you are talking about. Could you make a screenshot?


I think he might be referring to the 80-column vertical line.


That's correct, thanks for pointing it out.

