So the secret sauce was... interesting! Unlike any other company I have seen.
Each game team (of which there were 4 while I was there) was isolated from the other teams; only senior management and IT had access to the other teams' floors/buildings. If someone from another team needed to visit for some reason, we were usually instructed to turn off our monitors. There was very little sharing of code, and core graphics/gameplay features were fiercely protected by each team. The competition was not other games companies (with the exception of Nintendo) but the team in the next building.
There was almost no recruiting from other games studios; almost everyone was hired out of college (or recently so).
The management (Tim and Chris) generally let the teams do what they thought best and didn't push hard on things like deadlines. Release dates existed but were argued/negotiated with Nintendo when things slipped.
Employees were expected to be in the building at 9am (there was a bonus for punctuality, which I don't think I ever received; I was reprimanded about it several times). Lunch and dinner were provided, the studio was literally on a farm so there were no lunchtime distractions, and long hours were expected, although not mandated. 70+ hour weeks were not unusual.
Teams got a small bonus for shipping a game but then got a split of game sales (after some 'production cost' was subtracted). The split increased as Rareware negotiated higher payments from Nintendo; for later N64 games the team received something insane like $1 per cartridge, which on a 10-16 person team with millions of sales was crazy money. People were paying cash for houses and Porsches.
Unfortunately, while the generous bonuses drove the work culture, they also caused a large amount of resentment. People on shelved projects, projects on low royalty rates (e.g. Goldeneye), or poor-selling products put in the same or more work but did not see anywhere near the same fiscal rewards. It was a side effect of Tim and Chris being extremely generous and wanting to do the right thing by the staff, but it did cause a number of departures (albeit mostly from the Goldeneye team); I think this has been discussed publicly by many of those folks.
Namecheap recently suspended an account because of a tweet thread speculating that a domain was maybe related to abusive behavior. Turns out it wasn't at all. It was so arbitrary even the people who were speculating were surprised by that decision.
Even Google isn't so arbitrary as to base their ban decisions on random Twitter discussion.
The original Lisp badge (or rather, SCHEME badge):
Design of LISP-Based Processors
or, SCHEME: A Dielectric LISP
or, Finite Memories Considered Harmful
or, LAMBDA: The Ultimate Opcode,
by Guy Lewis Steele Jr. and Gerald Jay Sussman,
(about their hardware project for Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course) (1979) [pdf] (dspace.mit.edu)
The amount of complication can be hundreds of times more than the
complexity, maybe thousands of times more. This is why appealing to
personal computing is, I think, a good ploy in a talk like this because
surely we don't think there's 120 million lines of code—of *content* in
Microsoft's Windows — surely not — or in Microsoft Office. It's just
incomprehensible.
And just speaking from the perspective of Xerox Parc where we had to do this
the first time with a much smaller group — and, it's true there's more stuff
today — but back then, we were able to do the operating system, the
programming language, the application, and the user interface in about ten
thousand lines of code.
Now, it's true that we were able to build our own computers. That makes a
huge difference, because we didn't have to do the kind of optimization that
people do today because we've got things back-asswards today. We let Intel
make processors that may or may not be good for anything, and then the
programmer's job is to make Intel look good by making code that will
actually somehow run on it. And if you think about that, it couldn't be
stupider. It's completely backwards. What you really want to do is to
define your software system *first* — define it in the way that makes it the
most runnable, most comprehensible — and then you want to be able to build
whatever hardware is needed, and build it in a timely fashion to run that
software.
And of course that's possible today with FPGA's; it was possible in the 70's
at Xerox Parc with microcode. The problem in between is, when we were doing
this stuff at Parc, we went to Intel and Motorola and pleaded with them to
put forms of microcode into the chips to allow customization and function
for the different kinds of languages that were going to have to run on the
chips, and they said, What do you mean? What are you talking about?
Because it never occurred to them. It still hasn't.
The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.
Here is a photo of a chalkboard with status of the various projects:
The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.
One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.
We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.
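The core idea in that abstract is easy to caricature in software: the program is a linked record structure, and the processor evaluates it recursively. A toy sketch in Python (illustrative only; nothing like the chip's actual instruction set):

    # Toy sketch: programs as trees of records, interpreted recursively.
    def evaluate(expr, env):
        op = expr[0]
        if op == "const":          # literal value
            return expr[1]
        if op == "var":            # variable lookup
            return env[expr[1]]
        if op == "if":             # conditional: recurse into one arm
            _, test, then, alt = expr
            return evaluate(then if evaluate(test, env) else alt, env)
        if op == "+":              # primitive application
            return evaluate(expr[1], env) + evaluate(expr[2], env)
        raise ValueError(f"unknown operator: {op}")

    # (if p (+ 1 2) 0) with p bound to true evaluates to 3
    prog = ("if", ("var", "p"), ("+", ("const", 1), ("const", 2)), ("const", 0))
    print(evaluate(prog, {"p": True}))  # 3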
Here's a map of the projects on that chip, and a list of the people who made them and what they did:
Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]
Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):
The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].
[...]
For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)
9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.
[...]
The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:
These photos look very beautiful to me, and it's interesting to scroll around the hi-res image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it, so even though it's the biggest one, it really isn't all that complicated, so I'd say that "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)
This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:
A full-color hi-res image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.
Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?
If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!
There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?
I've been making my way through this book for the past few weeks; just started Chapter 20. I tried reading Harper's Practical Foundations for Programming Languages first, but it was too abstract for me, so I switched to TaPL.
What I like most about Pierce's book is that he introduces each concept with a formal, abstract definition, complete with proofs of correctness, but also follows that up with a concrete implementation in OCaml. The latter is very easy to follow if you've had some experience with the ML family of languages. I sometimes find myself skipping ahead to the OCaml version when I get lost in the math syntax, which for me is less familiar. I'm planning to come back to Harper's book later, but Pierce's book is the perfect fit for where I am now.
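For a flavor of that formal-rules-then-implementation pairing: here's the typed-arithmetic checker from the book's early chapters, sketched in Python rather than the book's OCaml (the term encoding is mine, not Pierce's):

    # Each typing rule in the text becomes one branch of typeof().
    def typeof(term):
        tag = term[0]
        if tag in ("true", "false"):
            return "Bool"
        if tag == "zero":
            return "Nat"
        if tag == "succ":                    # T-Succ: argument must be Nat
            if typeof(term[1]) == "Nat":
                return "Nat"
            raise TypeError("argument of succ is not a number")
        if tag == "if":                      # T-If: Bool guard, matching arms
            _, guard, then, alt = term
            if typeof(guard) != "Bool":
                raise TypeError("guard of conditional not a boolean")
            t_then, t_alt = typeof(then), typeof(alt)
            if t_then != t_alt:
                raise TypeError("arms of conditional have different types")
            return t_then
        raise TypeError(f"unknown term: {tag}")

    print(typeof(("if", ("true",), ("zero",), ("succ", ("zero",)))))  # Nat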
My only criticism is that some parts are very dated given it hasn't been updated in almost 20 years. In particular, the version of Java he discusses throughout the book (pre-generics, pre-type-inference) bears little resemblance to the modern one. And since 2002 we've seen affine types (e.g. Rust) start to have mainstream influence, among other things.
In case it's helpful, I'm compiling a list of resources as I learn type systems, logic, category theory, etc.:
Someone here on HN recommended the excellent Computer Engineering: A DEC View of Hardware Systems Design[1], which covers much of the practical engineering & economics side of the VAX and its evolution through contemporary essays.
Reading between the lines, it was clear that DEC never managed to escape the gravity well of being born as a "module company", even as Moore's Law was inevitably pulling all modules of competitive relevance into the microprocessor itself.
In the mid-seventies at Swarthmore College, we were mired in punched-card Fortran programming on a single IBM 1130. The horror, a machine less powerful than the first Apple II. My job six hours a week was to reboot after each crash. People waited hours for their turn to crash the machine. I let a line form once people had their printouts. I'd find the single pair of brackets in a ten-line listing, and I'd explain how their index was out of bounds. They thought I was a genius. Late one Saturday night, I made a misguided visit to the computer center while high, smelled sweat and fear, and spun to leave. Too late, a woman's voice: "Dave! I told Professor Pryor he needed you!" We didn't know that Fred Pryor was the economics graduate student freed in the 1962 "Bridge of Spies" prisoner exchange. Later he'd learn that I was his beagle's favorite human, and I'd dog-sit and find steaks left for me that I couldn't afford myself, but for now I feared him. So busted! Then I heard this voice: "See these square brackets? See where you initialize this index?" He was spectacularly grateful.
One cannot overstate the rent in the universe that an APL terminal presented, catapulting me decades into the future. I quickly dreamed in APL. For $3 an hour for ten hours (a massive overcharge) I took a professor's 300-line APL program translated literally from BASIC, and wrote a ten-line APL program that was much faster. One line was classic APL, swapping + and * in an iterated matrix product for max and min. The other nine lines were input and output. The professor took years to realize I wasn't also calling his code, then published my program.
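I don't know what the professor's program computed, but the trick generalizes the inner product: where an ordinary matrix product computes sum-of-products, APL's operator notation lets you compute max-of-mins instead, the sort of thing used for bottleneck-path problems. A rough NumPy rendering of the flavor (not the original program, obviously):

    import numpy as np

    def maxmin(a, b):
        # c[i,j] = max over k of min(a[i,k], b[k,j])
        # (the ordinary product would be: sum over k of a[i,k] * b[k,j])
        return np.maximum.reduce(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

    a = np.array([[0.9, 0.2],
                  [0.4, 0.7]])
    c = a.copy()
    while True:                       # iterate to a fixed point
        nxt = np.maximum(c, maxmin(c, a))
        if np.array_equal(nxt, c):
            break
        c = nxt
    print(c)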
Summer of 1977 I worked as a commercial APL programmer. Normally one never hires college students for the summer and expects them to be productive. The New York-based vice president was taking the train every day to Philadelphia because the Philly office was so far underwater, and he was desperate to try anything to save himself the commute. He knew Swarthmore had a terminal, and heard about me. At my interview I made a home-run derby of the questions from the Philly boss. The VP kept trying to intervene so he could put me in my place before hiring me. The tough questions were "dead key" problems: how do you write a given program if certain keys on the keyboard are broken?
Our client was a mineral mining company, our task a reporting system. The reports were 2-dimensional projections of a 9-dimensional database. The accountants wanted all totals to be consistent across reports, and to be exactly the sums of their rounded components. I broke the news to our team that we needed to start over, rounding the 9-dimensional database once and for all, before generating each report. This took a few weeks; I wrote plenty of report generation helper routines. My coworkers overheard me say on a phone call that I was being paid $5 an hour, and at the time I didn't understand which way they were shocked. I didn't have much to do the rest of the summer.
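The scheme, sketched with made-up numbers and two dimensions standing in for nine: round the base cells exactly once, then make every report total a sum over those already-rounded cells, so any two reports slicing the same data agree.

    raw = {  # (region, mineral) -> unrounded dollars
        ("east", "iron"): 10.4, ("east", "zinc"): 2.4,
        ("west", "iron"): 7.6,  ("west", "zinc"): 1.8,
    }
    cells = {k: round(v) for k, v in raw.items()}   # rounded once, up front

    # Every report is now a sum over the same rounded cells, so row totals
    # in one projection match column totals in another.
    by_region  = {r: sum(v for (rr, _), v in cells.items() if rr == r)
                  for r in ("east", "west")}
    by_mineral = {m: sum(v for (_, mm), v in cells.items() if mm == m)
                  for m in ("iron", "zinc")}
    assert sum(by_region.values()) == sum(by_mineral.values())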
The mining company VP found me one morning, to ask for a different report, a few pages. He sketched it for me. He found me a few hours later to update his spec. He loved the printout he saw, imagining it was a prototype. “It’s done. I can make your changes in half an hour.”
At a later meeting he explained his own background in computing, how it had been key to his corporate rise. Their Fortran shop would take a month to even begin a project like I had knocked off in a morning, then weeks to finish it. He pleaded with me to pass on Harvard grad school and become his protege.
Some Lisp programmers had similar experiences, back in the day. Today, APL just sounds like another exotic language. In its heyday it was radical.
Not sure how "unique" this model is; for example, ClarisWorks was built out of an even more powerful block model (they called them frames) back in the late 1980s:
> We came up with a frame-based approach. Most of the functionality particular to the various application types was packaged up into "frames": word processing frames, graphics frames, etc. These frames were then used as building blocks to make documents of the appropriate types, in a unified programming framework. E.g., a word processing document was essentially a bunch of text frames, one per page, linked together. (Doing this neatly was a big challenge - many subsequent efforts at building a component-based architecture (e.g. OpenDoc) have failed to take into account the top-level user interface requirements.) The result was that not only was most of the code shared across the document types, but the application was also truly integrated - the frames could be embedded in each other. E.g., you could plop a spreadsheet frame right into your word processing document. Text objects in a graphics document had a full-featured word processing engine behind them. The database form editor used the built-in graphics environment. Etc.
> One related cool thing we had was a "shared graphical context" mechanism: sometimes, stuff would wind up being displayed in multiple frames at once. E.g., maybe you're dragging an object across a page break in a document with multiple pages (like this). We developed a general architecture for displaying actions live in multiple contexts. Of course, a lot of this kind of stuff is old hat today, but it was new and exciting in 1989. Some creative programming was required to do these things efficiently on the hardware of the time.
> There were some cool features that didn't make it into the shipping product. For example, originally spreadsheet formulas were much more powerful: you could relate, e.g., graphical object positions and document properties to spreadsheet cells. So you could have the result of a calculation move objects around graphically, or vice-versa. (Further work in this direction led to a novel constraint-based programming paradigm called MOOSE, which I may resurrect some day...)
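The embedding idea is easy to caricature: every content type is a frame, and frames nest, so a document is just a tree of frames. A hypothetical sketch (class names mine, nothing to do with the actual ClarisWorks code):

    class Frame:
        def __init__(self):
            self.children = []
        def embed(self, frame):          # any frame can host any other
            self.children.append(frame)
        def draw(self, indent=0):
            print(" " * indent + type(self).__name__)
            for child in self.children:
                child.draw(indent + 2)

    class TextFrame(Frame): pass         # word processing engine behind it
    class SpreadsheetFrame(Frame): pass  # cells and formulas
    class GraphicsFrame(Frame): pass     # vector objects

    # A word processing document: text frames linked per page, with a
    # spreadsheet frame plopped right into one of them.
    page = TextFrame()
    page.embed(SpreadsheetFrame())
    doc = Frame()
    doc.embed(page)
    doc.draw()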
One of my very favourite programming books. Using it, together with "Threaded Interpretive Languages" https://www.amazon.co.uk/Threaded-Interpretive-Languages-R-G... I wrote a couple of Forth implementations in Z80 assembler back in the 1980s, and a Forth-like language for an Adventure writing system I created when I was first learning C++.
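For anyone wondering what "threaded interpretive" means in practice: the inner interpreter just walks lists of words, where each word is either a primitive or another list. Caricatured in Python (a real TIL does this in a few machine instructions, with addresses instead of lists):

    stack = []

    def lit(n):                      # push a literal
        return lambda: stack.append(n)
    def dup():                       # Forth DUP
        stack.append(stack[-1])
    def mul():                       # Forth *
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)

    def execute(word):
        if callable(word):           # a primitive: run it
            word()
        else:                        # a colon definition: thread through it
            for cell in word:
                execute(cell)

    SQUARE = [dup, mul]              # : SQUARE  DUP * ;
    execute([lit(7), SQUARE])
    print(stack)                     # [49]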
Oh hey neat, some coverage. Actually I'm right on the verge of the next big release (v0.8), which should make networked programming significantly easier.
But here's some more background: I'm co-author/co-editor of the ActivityPub specification, which might give you some idea that I have some experience with trying to build networked systems. Goblins is part of the Spritely project, or even more accurately, the foundation for it: https://spritelyproject.org/
Spritely has some pretty wild ambitions... I won't go into them in detail, the video near the top of the above site explains better. But achieving those in a feasible timeframe means we need some better tooling. Goblins is the foundation of that; it's implemented as a Racket library, but can/will be ported elsewhere. The docs linked above explain some of the core ideas, but honestly I think the video linked from this section of the site explains it better: https://spritelyproject.org/#goblins
In general, people tend to realize that something interesting is happening when they see a cool shiny demo. So here are two pages that might give you an idea with some cool shiny demos:
What? How can the last one be only 250 lines of code? (And a mere 300 more for the GUI!) Well, that's because we're implementing a peer-to-peer distributed object programming protocol called CapTP (which includes wild things like distributed garbage collection over the network (for acyclic references) and "promise pipelining", and is object-capability secure). The next release will be advancing that work significantly. The Agoric organization is also implementing CapTP, but their stuff is in JavaScript instead of Racket; we plan to have our CapTPs converge. Thus it shouldn't really matter whether you write your code in JavaScript or Racket; your code should be able to talk to each other.
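Since "promise pipelining" is probably the unfamiliar bit: it means you can make calls on results you don't have yet, and the whole dependent chain ships as one batch instead of one round trip per call. A toy local simulation of the idea (hypothetical API, nothing like Goblins' actual interface):

    class Promise:
        def __init__(self, pipeline):
            self.pipeline = pipeline
        def call(self, method, *args):       # call on an unresolved result
            result = Promise(self.pipeline)
            self.pipeline.append((self, method, args, result))
            return result

    class Connection:
        def __init__(self, root):
            self.root, self.pipeline = root, []
        def remote(self):                    # promise for the far end's root
            return Promise(self.pipeline)
        def flush(self):
            # "Server side": resolve the whole queued pipeline in one pass.
            values = {}
            for target, method, args, result in self.pipeline:
                obj = values.get(id(target), self.root)
                values[id(result)] = getattr(obj, method)(*args)
            self.pipeline.clear()
            return values

    class Counter:
        def __init__(self): self.n = 0
        def incr(self, k): self.n += k; return self.n
    class Directory:
        def lookup(self, name): return Counter()

    conn = Connection(Directory())
    counter = conn.remote().call("lookup", "counter")  # no round trip yet
    total = counter.call("incr", 5)                    # still none
    print(conn.flush()[id(total)])                     # one batch; prints 5

In real CapTP the batch crosses the network and the promises resolve to remote references, but the shape is the same.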
Anyway, new release coming out soon. Hope that answers some things since the docs page might not convey what's fully interesting about it (and indeed, neither does this text, but the videos mentioned above do a better job).
I always fancied a database that worked like a dynamic version of Prolog, with facts that could be added over time, queries that can use inference, etc. I couldn't really find anything that worked like that, though; everything seemed to either have a fixed set of facts or not support inference. Looking for something like that is what first got me interested in differential dataflow.
The fact that this is bottom-up makes me think you probably can't go crazy with the number of rules.
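"Bottom-up" here means the evaluator materializes every derivable fact until a fixed point, rather than searching backwards from a query the way Prolog does. A naive sketch of that loop:

    facts = {("edge", "a", "b"), ("edge", "b", "c")}

    def step(facts):
        new = set(facts)
        # path(X, Y) :- edge(X, Y).
        new |= {("path", x, y) for (p, x, y) in facts if p == "edge"}
        # path(X, Z) :- path(X, Y), edge(Y, Z).
        new |= {("path", x, z)
                for (p, x, y) in facts if p == "path"
                for (q, y2, z) in facts if q == "edge" and y2 == y}
        return new

    while True:                  # apply all rules until nothing new appears
        nxt = step(facts)
        if nxt == facts:
            break
        facts = nxt
    print(sorted(f for f in facts if f[0] == "path"))
    # [('path', 'a', 'b'), ('path', 'a', 'c'), ('path', 'b', 'c')]

Every pass re-runs every rule over the whole fact set, which is why semi-naive evaluation (only joining against newly derived facts) matters once the rule count grows.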
The IBM Model M hype is absurdly overblown. Anyone who has used decent mechanical keyboards should be able to pretty much immediately notice just how cost-reduced that keyboard is. It is quite literally the cheapest buckling spring keyboard IBM managed to make in '89, and it shows. Cherry boards are often bashed for their flexing and wobbly cases, but the IBM Model M is hardly better with the top and lower parts of the clamshell having significant play. The Model M is a pure membrane keyboard, and thus limited to 2KRO. The Model M has no replaceable parts.
Why is it a membrane keyboard, when previous models were not? Because it is much cheaper to make.
Why does the Model M have only loosely fitting case parts? Because it is much cheaper to make.
Why does the Model M have plastic rivets holding the membrane stackup together, which always break off after 20 years or so due to ageing and bad design? Because it is much cheaper to make.
Why does the Model M have a single-piece barrel plate with barrels that occasionally break off, rendering the entire keyboard garbage? Because it is much cheaper to make.
Why does the Model M have essentially zero spill resistance? Because the cheap design doesn't permit spill resistance. When you spill water on an M, expect to either have it dry for weeks or months as the water evaporates from between the membranes, or to disassemble it. If it wasn't pure mountain spring water, you have to disassemble it, which--due to the plastic rivets--is a destructive process. To reassemble it, you have to drill a bunch of new holes through the actual keyboard parts to put screws in.
Yet despite these significant design and longevity issues the M somehow got a legendary reputation for being "solid"... it really isn't.
The Flash "editor" software was a work of genius. The combination of vector graphics editor, symbol library, layers, animation tools, and code was super powerful yet super approachable at the same time. I learned AS2 just by toying with the examples. Over time I learned AS3, then HTML and TypeScript, in that order.
One more shout out to the vector graphics editor - the way you did boolean operations with shapes based on their color has not been bested since by any other editor.
The flammability of old printers came from a combination of paper dust, worn-out printing ribbons, and the possible presence of a flammable cleaning solution, IIRC.
> The sum of a large number of small engineering improvements, coupled with a lot of component integration detail work, topped off by some very shrewd supply chain arrangements.
I think the vertical integration they have is a major advantage too.
I used to work at Arm on CPUs. One thing I worked on was memory prefetching, which is critical to performance. When designing a prefetcher you can do a better job if you have some understanding of, or guarantees about, the behaviour of the wider memory system (better yet if you can add prefetching-specific functionality to it). The issue I faced is that the partners (Samsung, Qualcomm, etc.) are the ones implementing the SoC and hence controlling the wider memory system. They don't give you detailed specs of how that works, nor is there a method for discussing with them appropriate ways to build things to enable better prefetching performance. You end up building something that's hopefully adaptable for multiple scenarios, and no one ever gets a chance to do some decent end-to-end performance tuning. I'm working with a model of what the memory system might be, and Qualcomm/Samsung etc. engineers are working with the CPU as a black box, trying to tune their side of things to work better. Were we all under one roof, I suspect we could easily have got more out of it.
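To make the tuning problem concrete: even the simplest stride prefetcher has knobs (when to trust a stride, how far ahead to fetch) whose right settings depend entirely on the memory system behind it. A toy sketch, nowhere near production RTL:

    table = {}       # pc -> (last_addr, last_stride)
    prefetches = []

    def access(pc, addr):
        if pc in table:
            last_addr, last_stride = table[pc]
            stride = addr - last_addr
            # "Trust" a stride once it repeats; how far ahead to fetch is
            # exactly the knob you'd want to tune against the real system.
            if stride != 0 and stride == last_stride:
                prefetches.append(addr + stride)
            table[pc] = (addr, stride)
        else:
            table[pc] = (addr, 0)

    for a in range(0, 640, 64):          # a streaming loop over 64B lines
        access(pc=0x400, addr=a)
    print(prefetches)                    # [192, 256, 320, ...]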
You also get requirements based upon targets to hit for some specific IP, rather than requirements around the final product, e.g. silicon area. Generally Arm will be keen to keep area increase low or improve the performance/area ratio without any huge shocks to overall area. If you're Apple, you just care about the final end-user experience and the potential profit margin. You can run the numbers and realise you can go big on silicon area and get where you want to be. With a multi-company/vendor chain, each link is trying to optimise for some number it controls, even if that overall has a negative impact on the final product.
> I am currently unable to work because macOS sends hashes of every opened executable to some server of theirs and when `trustd` and `syspolicyd` are unable to do so, the entire operating system grinds to a halt.
EDIT:
As others pointed out, I put this in my `/etc/hosts` file and refreshed it like so:
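    0.0.0.0 ocsp.apple.com

    $ sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

(The host entry is my best reconstruction, assuming the server in question was Apple's OCSP responder, which is what trustd checks; the second line is the standard macOS DNS-cache flush.)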
For what it's worth, my hacky solution to this is a script which kills all the background processes that use significant bandwidth. If you're interested in how I came up with the list of processes, I can share the BitBar [1] script I wrote for monitoring per-process network usage (I wrote a small wrapper around nettop that logs to a db, which is read periodically by my BitBar script to show me the per-process usage):
Come join us over at http://www.nextcomputers.org/forums/
You can find the part you need to fix that machine there.
There's also a NeXT emulator, Previous, that works really well.
It's Pascal-inspired but certainly not any of Wirth's syntaxes, with their half-line ifs and semicolons in weird places making blocks look odd. Ada has a more enclosing syntax, whereby "is" or "begin" and "end" encase what is inside them. Compare Modula's:
MODULE X;
...
END X.
and Ada's:
package X is
...
end X;
Ada's package feels like one statement containing others, whereas Modula's looks like it ends on the first line.
Hey, speech ML researcher here. Make sure you have different recordings of different contexts. fifteen.ai's best TTS voices use ~90 min of utterances, some separated by emotion. If you're having her read a text, make sure it's engaging--we do a lot of unconscious voicing when reading aloud. Tbh, if she has a non-Anglophone accent, you're going to need more because the training data is biased towards UK/US speakers.
If you want to read up on the basics, check out the SV2TTS paper: https://arxiv.org/pdf/1806.04558.pdf
Basically you use a speaker encoding to condition the TTS output. This paper/idea is used all over, even for speech-to-speech translation, with small changes.
There are a few open-source implementations, but they're mostly outdated--the better ones are either private for business or privacy reasons.
There's a lot of work on non-parallel transfer learning (aka subjects are saying different things) so TTS has progressed rapidly and most public implementations lag a bit behind the research. If you're willing to grok speech processing, I'd start with NeMo for overall simplicity--don't get distracted by Kaldi.
Edit: Important note! Utterances are usually clipped of silence before/after so take that into account when analyzing corpus lengths. The quality of each utterance is much much more important than the length--fifteen.ai's TTS is so good primarily because they got fans of each character to collect the data.
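If you need to do the clipping yourself, librosa's trim is one easy way (the threshold here is a made-up starting point; tune it per corpus):

    import librosa

    y, sr = librosa.load("utterance.wav", sr=None)   # keep native sample rate
    trimmed, _ = librosa.effects.trim(y, top_db=30)  # strip lead/tail silence
    print(len(y) / sr, "->", len(trimmed) / sr, "seconds")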
For metadata, exiftool is handy for removing it[0]:
$ exiftool -all= foo.jpg
Even better: first save the image as .bmp or another format that doesn't support metadata, then reload it, convert to JPEG, and run exiftool on that image.
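If you'd rather script that round trip, it's a couple of lines with Pillow (filenames are placeholders; note the JPEG re-encode is lossy):

    from PIL import Image

    Image.open("foo.jpg").save("scrubbed.bmp")    # BMP carries pixels only
    Image.open("scrubbed.bmp").save("clean.jpg")  # re-encode, no metadata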
Some of the most mildly interesting:
V9543XD Spacecraft collision injuring occupant, subsequent encounter
W5602XD Struck by dolphin, subsequent encounter
X35XXXD Volcanic eruption, subsequent encounter
X52XXXD Prolonged stay in weightless environment, subsequent encounter
Y0881XD Assault by crashing of aircraft, subsequent encounter