Hacker News | serhei's comments

To paint a less rosy picture, many countries are good at keeping out US based tracking / social media monopolies because they want to support their own tracking / social media monopolies. It's kind of a "well, duh" moment that China is going to support its own Internet services rather than let that data be collected by people in the United States.

Russia has a similar government-backed Internet services ecosystem going on, although they dropped the ball on blogging and wound up having to do a complex operation to buy Livejournal (where all the Russians were) and move the servers to Russian territory.


>To paint a less rosy picture...

I think different companies, even if they are monopolies in their own right, spread out over multiple countries are still preferable to a single global monopoly. Not just because of data collection, but also, for example, because it keeps tech know-how local.


It really isn't. Competition globally is better for consumers. Imagine if you had to drive only US cars to keep automotive knowledge local.


What Facebook (or for that matter Google) does is not competition. They're far too large to really compete with when you're getting up and running. Everything that even gets close to something they do or might want to do gets bought out.

I can't imagine anyone thinks such a situation is in the best interest of the user.


When they buy other companies, note:

1) It's a voluntary transaction and is not compulsory

2) They buy precisely because it would be harder to build the same thing from scratch, which refutes your claim that Google or FB can't be competed against.

edit: formatting


1) Yes, it's voluntary, but the interests of the consumers aren't necessarily taken into account. Only the shareholders of the company being bought stand to gain from the transaction.

2) Google/FB makes these companies an offer at a moment when it still makes sense to buy them (i.e. when both sides can win from the acquisition). If the company being bought decides to decline, Google or FB or whatever monolith has ways of destroying them:

- By buying their competitor instead, and investing heavily in them

- By just building the feature themselves, and potentially fighting a very expensive legal battle


Are you taking the general community's well-being into account when you ask for a raise in your salary? If and when you do ask for a raise, you're making the product your company produces that much more expensive, so you're not adhering to your own rules.

In any voluntary transaction, both parties gain out of it. People who are not parties to that transaction have no business in it.

> 1) Yes, it's voluntary, but the interests of the consumers aren't necessarily taken into account. Only the shareholders of the companies being bought stand to gain from the transactions

Yes of course, because they are trading their private property. When you buy vegetables from the supermarket, can someone complain that you're not taking other people's interests into consideration, namely that there's less food for everyone else? And that only you and the supermarket stand to gain from the transaction? This is a reductio ad absurdum of your statement.

> 2) Google/Fb gives these companies an offer at a moment when it still makes sense to buy them (i.e. when they can both win from the acquisition). If the company being bought decides to decline, Google or FB or whatever monolith has ways of destroying them:
>
> - By buying their competitor instead of them, and heavily investing in them
>
> - By just building the feature themselves, and potentially fighting a very expensive legal battle

Okay, I see your logical problem: you want good to be done in one instance, but IMHO you fail to see the consequences of applying that concept through to its conclusion.

Again, voluntary transaction. If the startup refuses, that's voluntary. If FB or Google goes and invests elsewhere, it's their money, their business. Who are we to tell them what to do with their money? Would you be okay if they told you what to do with money you've earned? None of the things you've mentioned are coercive, i.e. they are all based on things people have agreed on in written documents. Concretely: if they (Google/FB) buy a competitor, it's their money, their business. If they build the feature themselves, it's their money; they don't owe this startup anything, and the startup refused the offer anyway. If they fight a legal battle, it's still legitimate because both parties (Google/FB and the startup) have agreed to work within local laws (US or otherwise).


Please, give at least one example of a social networking application where "everybody is" and which makes Facebook fear the loss of its users.

For example, give us at least one example of a service that would directly cause Facebook to think twice before making its app constantly harass me to update my page, minimising the videos I am trying to close, and automatically sending me push notifications when someone starts a live video.


> Please, at least one example of a social networking application where "everybody is"

Not everybody is there, even on Facebook; there is no such social network yet.

> and which makes Facebook fear the loss of its users.

Whatsapp, Instagram and Snapchat. Facebook paid premium prices for the former two and made a pretty high offer for Snapchat. They wouldn't do that if they weren't concerned (of course, neither you nor I can confirm their emotional state, but we can derive useful conclusions from their actions).

> For example, give us at least one example of a service that would directly cause Facebook to think twice before making its app constantly harass me to update my page, minimising the videos I am trying to close, and automatically sending me push notifications when someone starts a live video?

Have you stopped using Facebook? Then why should they care? They will care IMO, but the point is that by continuing to use Facebook, you've shown that you weigh its benefits as greater than its costs/annoyances. So you're happy enough as far as they're concerned; they might try to make you happier, but that's left to them :)

Edit: Whoops! I hit submit before completing my last sentence


It's harder to stop using Facebook when there's no proper alternative.

More importantly, the functionality of Facebook can be replicated by anyone with more than a few years of experience in software.

So you're saying that Facebook does have proper competition? No, they don't. They were lucky enough to be at the exact place at the exact time to build a sturdy population of users. And now most of those users are paying for Facebook's luck.


>What Facebook (or for that matter Google) does is not competition.

Why not? Being the first one in and the best at what you do is still competition. Most industries don't get disrupted by a competitor until some truly valuable innovation happens (e.g. look at Yahoo and MySpace).

Whenever the government starts to interfere it sounds really good on paper, but it typically is not in the best interest of the user. Existing corporations often use laws to then further eliminate incoming competition.


Cars are not really a valid analogy here because we're talking about services with network effects that impose behavioral norms for all users regardless of location.

To use your driving example, it would be like saying that to use a car, everyone has to drive on the same side of the road or accept identical pollution-control standards. That's obviously not the case. Cars are adapted, sometimes quite substantially, to local jurisdictions.


Korea and Japan actually did this with cars, enforcing extremely high tariffs on all imported vehicles. It seems to have worked well for their automotive industries.


The US economy was founded on tariffs. No such thing as a free market in North America.

https://en.wikipedia.org/wiki/Tariffs_in_United_States_histo...


Tariffs in cars in a big market such as the US encourage foreign car companies to set up locally or at least in a NAFTA member country.


So?


So you get competitive technologies built in your market with local workforce by foreign companies.


My comment is about the US not being for free trade. Your comment is irrelevant.


AFAIK the US also has significant tariffs on certain kinds of imported vehicles, namely pickup trucks.


I think it's a tariff on commercial vehicles in general, which Ford worked around by shipping over "passenger cars" and then tearing out the extra seats in the US, turning them into vans: http://blog.caranddriver.com/feds-watching-fords-run-around-...


Can the USA wake up one day and decide to impose a 40% tariff without repercussions? Aren't there rules and international trade agreements backed by enforcement mechanisms?


> Can the USA wake up one day and decide to impose a 40% tariff without repercussions? Aren't there rules and international trade agreements backed by enforcement mechanisms?

You could debate how far the US could go with it before there are serious repercussions. However, several of the big auto-making countries, including Germany and Japan, go a great distance out of their way to shield their domestic auto industries from foreign competition at home. The US should behave exactly as they do, and for exactly the same reasons.

What has the US gained by allowing foreign auto competitors such equal footing? Japan and Germany are widely regarded as having the two best auto industries overall; it has clearly worked out just fine for them (insert the replies about correlation/causation). Why is the US held to a very different standard than China, Germany and Japan (3 of the 4 largest economies) when it comes to trade? That should and will end by necessity, whether it's a Trump or a Bernie Sanders that does it. It's a cultural wave that can't be stopped.

The US should use every lever at its disposal to compete, including strategically manipulating the global reserve currency (e.g. it's crazy that the US isn't running a $2 trillion, ten-year infrastructure QE program; the USD is unlikely to last more than a few more decades as the global reserve currency, and we should leverage it while we have it to increase US competitiveness).


You're ignoring the US consumers who get cheaper and higher quality vehicles from abroad because we don't have protectionist politics on vehicles. Why should I pay more for inferior Detroit cars?


Monoculture = bad. Rich companies are pretty good at kicking away the ladder. A little anti-trust goes a long way.


Monopolies are bad for consumers. The Chinese government is protecting Chinese markets from global monopolies (and also acting as the world's biggest innovation VC). Other countries have been asleep at the wheel and are only just starting to wake up with Uber and Airbnb. (Also, the fact that FB has been caught targeting Kremlin propaganda at voters in foreign elections makes me wonder if they will get shuttered in a lot of places. And Twitter can only be next.)


There's a difference between protecting from global monopolies, and totally eliminating any competition.

The fact that the Chinese-sanctioned companies become monopolies themselves is even more proof that it's more about nationalism than about protection from monopolies.


> it's more about nationalism than about protection from monopolies

Well if I was Chinese I guess I'd hope my government was making some effort to protect the national interest at least.

Of course they are developing their own monopolies and have some pretty terrifying social control programs in the works. That can be criticised in its own right, and hopefully it will be. But to try to make the case that FB is a benevolent saviour of the Chinese citizenry from their evil government is nonsense and whataboutery. If you must believe in the US as some kind of global beacon of free speech and democracy, then at least don't assume that being merely the lesser of two evils is a great way to promote it.

The hard problem is that any proprietary social platform has to have a mass of people using it to be particularly useful, and any social platform that big is collecting data that makes it a huge security risk and economic factor at the national level. The only solutions I can think of are to have social networks under national control or to promote interoperability between smaller networks and partition data collection to an acceptable level for national security purposes.


> The Chinese government is protecting Chinese [control over citizens] markets

> and also acting as the world's biggest innovation VC

Wew.

> Other countries have been asleep at the wheel

Calm down with the CN propaganda.


It's a general question of national security for any country, in the broader sense that also includes control of their economies. The Chinese have been very smart about this, and other countries should look at what they've done. I don't think that's pushing a pro-Chinese line so much as saying that other countries need to learn something about protecting the national interest from 'foreign data invaders' (which have mainly been US-based to date, but which I can see starting to include Russia and China).


Thank you for espousing free-market ideology (I concur with you). I don't know why, but these days I feel HN has also become protectionist (where I'd have expected free marketeers).

Perhaps this is out of context, but I wonder if Trump is a symptom of this protectionism or if they're unrelated


There are only two countries which have homegrown companies protected by their governments. OP's comment still holds true about US companies not necessarily being good for everyone.


The alternative of all the data siloed to a single world power, the US, is not better.

I was going to add ".. unless you're American", but I'm not even sure if such a concentration of power would be a net neutral (or good) in the long run, for any citizen, US or elsewhere.


Have a look at https://sandstorm.io/


Seems kind of close to my description... I need to try it out. I wonder why I never hear of people using this; maybe it's my ignorance.


> After a lot of lunacy including running around in the backyard and driveway looking for a spot of coverage, we finally settled on a system of handwritten IOU notes, and most people had to buy in twice and have an appointed stranger promise to send them BTC later.
>
> I would argue that there were few people savvier than our group when it came to Bitcoin, and we were having this experience? I realized I was maybe a little too immersed in the hype to see where Bitcoin was falling short in usability.

Ironically, this is not too different from how the financial industry solved the problem of having trouble moving large quantities of precious metals. Lots and lots of IOU notes. Eventually, the IOU notes were so ubiquitous that the underlying precious metals could be dispensed with entirely, and we got fiat.


The reasoning on this guy's website is often pretty fascinating: http://www.worlddreambank.org/P/PLANETS.HTM -- these are entire fantasy planets. For a warm-up exercise, he started out by tilting the Earth in different ways and working out how that would affect climate and biomes.


It's worth noting that conventional railways tend to sacrifice a bit of speed (relative to what you could engineer them for) in favour of being flexible in terms of how much passenger capacity can be provided, supporting a mixture of local/express services, and so forth. So the 'sweet spot' for a conventional or high-speed rail line between two locations may tend to be larger.


That's pretty similar to how languages like Scala with higher-order features translate down to the JVM bytecode (which is subject to certain restrictions).


In general, if some entity exists only in named form in the target language, then the anonymity of the corresponding source-language entities must be simulated via gensyms.

For instance, the branch targets in a while loop are anonymous, right? But in the target language, you must have a named label for the instruction. Solution: machine-generated label.
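A minimal sketch of the idea in Python (the `lower_while` helper and its pseudo-instructions are invented purely for illustration): a gensym counter hands out fresh names for the loop's anonymous branch targets.

```python
import itertools

# Gensym: every call returns a fresh, unique label name.
_counter = itertools.count()

def gensym(prefix="L"):
    return f"{prefix}{next(_counter)}"

def lower_while(cond_code, body_code):
    """Lower `while cond: body` into labeled pseudo-assembly.

    The branch targets are anonymous in the source, so we invent
    names for them with gensym.
    """
    top = gensym("loop_top_")
    end = gensym("loop_end_")
    return [
        f"{top}:",
        f"  {cond_code}",
        f"  jump_if_false {end}",
        f"  {body_code}",
        f"  jump {top}",
        f"{end}:",
    ]
```

Each call to `lower_while` gets its own labels, so nested or sequential loops can never collide.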


The next boom will be in whatever _isn't_ being hyped right now.


I believe hype is actually Musk's primary product: by constantly coming up with these announcements he plays his part in maintaining the narrative of the US being the leader in tech innovation, over and above the narrative of the US being a formerly cutting-edge economy that is in the process of falling behind. (Which one is actually true is debatable, but managing perceptions is an important part of keeping the economy running.) Someone who merely organized solid electric car manufacturing or workable private space launches would probably not be as successful in attracting investment.


I certainly don't look forward to auditing a Solidity contract with the complexity of the federal tax code.


It sure makes me wonder if Ethereum would do better with a less forgiving programming language. The fact that the syntax resembles JavaScript is not reassuring, and neither is the fact that the very first code snippet in "Solidity by Example" [1] is littered with comments like this:

        // Forward the delegation as long as
        // `to` also delegated.
        // In general, such loops are very dangerous,
        // because if they run too long, they might
        // need more gas than is available in a block.
        // In this case, the delegation will not be executed,
        // but in other situations, such loops might
        // cause a contract to get "stuck" completely.
[1]: https://solidity.readthedocs.io/en/develop/solidity-by-examp...

In general, the easier the code is to read and the harder it is to write, the better. (Force the programmer to think carefully, not the reader!) Anything that warrants a comment like that in the Solidity examples should at the very least refuse to compile without the programmer adding some attention-grabbing _UNSAFE annotation. Better still, there should be mechanisms to ensure the code is written in a way where everyone understands the consequences of e.g. running out of gas in the middle of a function.
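To see concretely why such loops are dangerous, here is a toy gas model in Python (all names invented; real EVM gas accounting is far more involved). A delegation-forwarding loop charges gas per hop, so a long enough chain aborts the entire call, the "stuck contract" scenario the quoted comment describes.

```python
class OutOfGas(Exception):
    """Raised when a computation exceeds its gas budget."""

class GasMeter:
    def __init__(self, limit):
        self.remaining = limit

    def charge(self, amount):
        if amount > self.remaining:
            # On a real chain, the whole transaction reverts here.
            raise OutOfGas("gas exhausted; transaction reverts")
        self.remaining -= amount

def forward_delegation(chain, start, gas_limit, step_cost=10):
    """Follow a delegation chain, paying gas for every hop."""
    meter = GasMeter(gas_limit)
    current = start
    while current in chain:
        meter.charge(step_cost)
        current = chain[current]
    return current
```

With a generous limit the chain resolves; shrink the limit and the very same data makes the call abort instead.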


> It sure makes me wonder if Ethereum would do better with a less forgiving programming language.

This will be hard. While Solidity certainly has problems of its own, some of its insecurity comes from the EVM's design, which is almost laughably low-level and thus very hard to reason about. It certainly doesn't seem to be informed by modern VMs like LLVM, the JVM or BEAM, which know a great deal more about the semantics of the programs they're running and have features like dispatch. My guess is the approach was "Bitcoin with a few more opcodes", yielding something more like an 80s-era CPU than a "VM".

As a result, the compiler is tasked with running the whole show. Add to this the coupling of RPC to Solidity's mangle-check-and-jump dispatch approach, and you start to see why there's been so little innovation in this area: Solidity has a tight grip on the Ethereum ecosystem. Also, writing a compiler to this substrate is not easy, and you're penalized for code size (there's a limit on how big a contract can be).

I'm opinionated, as an author of a competing smart contract language (http://kadena.io/pact) that runs in an interpreter, is Turing-incomplete, has single-assignment variables, etc., which we think makes a lot more sense for the kind of stylized computing you're doing on a blockchain. We even have the ability to compile our code to SMT-LIB2 for use with the Z3 theorem prover and will be talking more soon about our DSL for writing proofs. Interestingly though, we find that choosing the right domain for your language goes a long way towards safety AND expressiveness, so that you're not constantly cursing your compiler/interpreter while also worrying less about $50M exploits :)


Oh awesome, glad you mentioned your project. I've actually been writing a blog post that parallels a lot of your complaints right here about Ethereum. For safety as well as implementation complexity, Turing completeness is a significant disadvantage for Ethereum, and I'm sorry to see it played up by so many as an advantage. Making a Turing-incomplete DSL need not be limiting and inexpressive, as you clearly know (and have shown).

I was thinking about creating a functional contract DSL that Coq could extract to. Not as familiar with Z3 and SMT solvers, but that's clearly another good approach to safety (I think Ethereum has finally started looking at formal verification after the DAO debacle, not sure what the status is. EDIT: looks like they've made a good bit of progress the past few months).

What about the status of your project? I see you've bootstrapped the contract language and a consensus protocol, do you plan on bootstrapping a P2P network? Or have you done that already and I missed it?


Pact is live and in-use in enterprise settings on our permissioned blockchain platform; you can download the interpreter which when launched with "-serve" presents the RPC REST API, allowing you to write whole applications with just the interpreter; indeed there's a sample "TODO MVC" app showing just how easy this is (see https://github.com/kadena-io/pact/blob/master/README.md).

As for a public platform, stay tuned; we will have an announcement very soon. Suffice to say for now: any work you do using Pact will have a public chain to run on in the very near future; you can use the O/S releases to develop Pact; and if you have an idea for permissioned (think "B2B only") blockchain applications, get in touch!


Ethereum is a flawed design, no question.

But how does Kadena's Pact compare to Ivy[1] by Chain[2]? The goals seem quite similar.

[1] https://blog.chain.com/announcing-ivy-playground-395364675d0... [2] https://chain.com/


Ivy is different, for one in that it's intended to be trans-compiled to Bitcoin and other substrates. It shares Pact's focus on public-key authorization, due to their shared debt to Bitcoin scripts as design inspiration.

Perhaps the main difference though is Ivy's language focus on financial-type transactions, shared by many other languages in this space. This seems logical for a smart-contract system, but in Pact we identified that many blockchain applications will be totally non-financial (supply chain, healthcare), and that a database metaphor is the most important feature.

Indeed, SQL is Pact's biggest influence. After all, SQL doesn't need Turing-completeness, mutable variables, or unconstrained loops. With Pact, we sought to end the war between SQL and stored procedures (most DBs use a different dialect for SPs than for SQL) with "one lang to rule them all".

Database-orientation also powers one of Pact's most important features: the ability to run a blockchain node writing directly to an RDBMS like Postgresql, Oracle, whatever. This way you don't have to write tons of smart-contract code to integrate with legacy systems.


> After all, SQL doesn't need Turing-completeness, mutable variables, unconstrained loops.

Minor nitpick: standards-compliant SQL, due to recursive common table expressions, is actually Turing complete. Whether that's good or bad...
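The recursion is easy to try from Python's built-in sqlite3, which supports `WITH RECURSIVE` (SQLite 3.8.3 and later); here a recursive CTE generates the numbers 1 through 10 with no table at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 10
    )
    SELECT n FROM counter
""").fetchall()
numbers = [n for (n,) in rows]
```

General recursion plus the ability to carry state from one step to the next is exactly what pushes the language over the Turing-completeness line.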


What would be the complications in providing a transpiler from Pact to Solidity?


It's possible, but impractical:

- Pact executes in a runtime environment that at a minimum ensures any ED25519 signatures on the transaction are valid, such that code can then test the validated public keys to enforce authorization rules. So this would need to happen before each transaction, which would assuredly be super-expensive gas-wise.

- Pact modules (i.e., contracts in the Solidity sense) export functions on-chain that can then be imported/invoked by other modules by name, thus allowing safe inter-contract communication, on-chain services, and other nice things. This model is very foreign to Solidity where inter-contract communication is poorly supported; best practices there dictate copy-pasting approved code (c.f. ERC 20 tokens) and hosting all functionality in-contract.

- Pact being interpreted is hugely valuable on-chain, as you can directly inspect human-readable code, as opposed to EVM assembly/opcodes. This is more of a philosophical point though.

- Other things, like supporting direct storage of JSON objects in the database, exporting Pact types as JSON (which you get for free in the Pact runtime), key/value db functionality, transaction history at the db level, support for governance in upgrades -- all of these would need to be coded in Solidity, at great computational expense.

The biggest issue facing Solidity developers today is the sheer cost of best practices: ensuring you handle overflows right (ie don't use math primitives but call an approved function), planning for upgrades/data export, you name it: you have to use that code and pay that gas. The environment really needs to provide a lot more "free" functionality than it does today to change this reality.


> best practices there dictate copy-pasting approved code

Not entirely true. In Solidity, you don't need the whole code to call other contracts, you just need their interface (function signature) and you can call any contract.

You'll see all the best practices use interfaces these days.

Agree with all other points, especially about the math safety - there needs to be more support for financial math too.


Hmm ... would love to see an example of Solidity contracts calling pre-existing Solidity contracts as a best practice, especially given the difficulty of verifying the state of code on the blockchain.

In Pact, when you load a module, all references are aggressively resolved and directly linked to the calling code. In Ethereum, if the contract you're calling doesn't have the interface you thought it did, you won't find out until you actually call the code.

My understanding was you really can only trust your own code in Eth, that you can't rely on a pre-uploaded contract (like a safe math contract) -- and you certainly can't extend one safely.
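The distinction being drawn is roughly load-time versus call-time resolution. A toy sketch in Python (invented names; this is not Pact's actual loader):

```python
# Registry of previously loaded, trusted functions.
registry = {"safe_add": lambda a, b: a + b}

def load_module(refs, registry):
    """Eagerly resolve every referenced name to a concrete function.

    A missing dependency fails at load time, so a loaded module can
    never hold a dangling reference -- unlike call-by-name schemes,
    where a bad reference only surfaces when the call is finally made.
    """
    missing = [name for name in refs if name not in registry]
    if missing:
        raise NameError(f"unresolved references: {missing}")
    return {name: registry[name] for name in refs}
```

Loading a module that mentions a missing function fails immediately, rather than at the first (possibly much later) call.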


So I can't send a contract in your language some signatures as byte arrays and have it validate them in its logic? Does any program that signs things also need to be able to produce blockchain transactions? Just an initial question; I'll read more on your site.


We haven't yet seen a use case where the (signature, payload) tuple is not isomorphic to a transaction. Yes, in the case of multiple distinct payloads, you'd have to break those into separate transactions, but that seems like a very specific use case that doesn't sound very "transactional".

Pact's philosophy sees a blockchain as a special-purpose database, for distributed transactions, so it's not designed for many "normal" database cases, namely bulk operations, searches, heuristics, etc. The use case of accepting multiple signed payloads sounds suspiciously "batchy" to me. Also, Pact is "success-oriented": we see failures like a bad signature as something that should fail your tx only. This is a way of avoiding dicey control-flow logic.

So, if a single payload is what you need the signatures on, you simply design your contract API/function call to have the specific function to handle that data (store it in the database, whatever), and let the environment check the signature.

EDIT: Pact is actually `([Signature], payload)` -- i.e., you can sign the same payload multiple times


Signing the same payload multiple times would work for my use case (channels). I also need to accept transactions signed by at least one of two keys; I suspect this might be possible too. However, I can imagine that anything more complicated would go outside of the system you have designed. I haven't had the chance to learn your language, but I would be wary about it either being too limited for the edge cases most real-world stuff is going to have, or turning into a "universal framework" antipattern.


> I also need to accept transactions signed by at least one of two keys.

Keysets are designed for precisely this; what's more this rule can now be persisted.

> anything more complicated would go outside of the system you have designed.

Always a possibility for any PL, especially one running in a blockchain. Pact makes fewer assumptions about use cases than most, however. It's imperative, allows functions and modules, and offers a database metaphor. That handles a fair number of things.


I'm impressed by Pact. Thanks for sharing. Is there a particular reason you made authorisation via keysets a primitive in your language/infrastructure?


Bitcoin was the inspiration, in identifying a fundamental aspect of blockchain as "authorization by verifying signatures on a transaction." Keysets are a primitive that avoids making multisig a "special case": anywhere you can have one signature, in Pact you can have multiple. But in truth, keysets are only part of the picture: Pact runs in an environment that is required to have already verified all signatures on the transaction, and to simply provide the corresponding public keys in the environment.

The idea here is "auth is easy": you don't have to worry about which curve the sigs use (Pact supports ED25519 now, but the lang and API support adding whatever you need); you don't have to handle bad signatures (they immediately abort the tx); all you have to do is define a keyset.

Lastly, the reason for the primitive is to have keysets be inviolable data that can be stored in the database, for later use in voting, row-level auth, whatever you can think of.
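A rough model of the keyset idea in Python (the `keys_all`/`keys_any`/`keys_2` names mirror Pact's predicate conventions, but this is a toy illustration, not Pact's implementation). Since the runtime has already verified all signatures, contract logic only ever compares sets of public keys:

```python
def keys_all(keyset, signers):
    return keyset <= signers       # every key in the keyset signed

def keys_any(keyset, signers):
    return bool(keyset & signers)  # at least one keyset key signed

def keys_2(keyset, signers):
    return len(keyset & signers) >= 2

def enforce_keyset(keyset, predicate, signers):
    """Abort the transaction unless the keyset predicate holds."""
    if not predicate(keyset, signers):
        raise PermissionError("keyset failure: transaction aborts")
```

"At least one of two keys" is then just `enforce_keyset({key_a, key_b}, keys_any, signers)`, and multisig is the same call with a different predicate.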


Heck, the basic methods of writing provably correct programs have been explained in plain English since at least the 70s:

https://www.amazon.com/Discipline-Programming-Edsger-W-Dijks...

https://www.amazon.com/Science-Programming-Monographs-Comput...

This is not some rocket-science verification with a dependently typed theorem-prover language; it's fairly simple paper-and-pencil logic. It should not be hard to adapt it to Solidity-specific concepts like running out of gas.

The reason these techniques are mostly ignored is that they don't scale at all to large programs calling APIs with imprecise semantics (e.g. filesystem, network), and most people would rather publish imperfect software and iterate than spec everything up front. Well, unlike most software, contracts are not large, their semantics are meant to be 100% precise, and most people would rather take the time to make sure a contract does what it claims than discover a bug afterwards. I would hope.
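For flavour, here is the style of reasoning in miniature, sketched in Python with assertions standing in for the pencil-and-paper proof obligations (division by repeated subtraction, with the loop invariant and postcondition spelled out):

```python
def divmod_slow(a, b):
    """Compute (q, r) such that a == q*b + r and 0 <= r < b."""
    assert a >= 0 and b > 0                 # precondition
    q, r = 0, a
    while r >= b:
        assert a == q * b + r               # loop invariant
        q, r = q + 1, r - b                 # r strictly decreases -> terminates
    assert a == q * b + r and 0 <= r < b    # postcondition
    return q, r
```

The invariant plus the negated guard gives the postcondition, and the decreasing `r` is the termination argument; that is the whole Dijkstra/Gries recipe in one function.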


Calling it "imprecise semantics" is quite the understatement.

The environment software runs in is often scarcely understood at all. Operating systems and web browsers change without notice due to auto-upgrades. Libraries are often used without understanding their implementations, and they're also constantly being upgraded. Users can install plugins that introduce bugs that can't be reproduced in the test environment.

You can't build an accurate mathematical model of an environment you haven't observed. Integration tests (run against many platforms) and production logging help, but there are still plenty of unknowns.


How do you normally handle those unknowns?


Imitating other code that is known to work [1]. Lots of testing. Fixing the bug when someone runs into it and complains (a viable last-resort for almost anything besides a Solidity contract).

[1]: For example, when working with filesystems, people write code that they saw other people using. The code may or may not work as designed depending on the specific filesystem. see e.g. https://danluu.com/file-consistency/
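A classic instance of "imitating code known to work" is the write-then-rename pattern for replacing a file. A POSIX-oriented sketch in Python (whether it is actually crash-safe on a given filesystem is exactly the imprecise-semantics problem under discussion):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` via write-temp / fsync / rename.

    The final directory fsync tries to make the rename itself durable;
    how much of this is guaranteed depends on the filesystem.
    """
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # flush file contents to disk
        os.rename(tmp, path)          # atomic replace on POSIX
        dirfd = os.open(d, os.O_RDONLY)
        try:
            os.fsync(dirfd)           # persist the directory entry
        finally:
            os.close(dirfd)
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

People copy this shape around precisely because nobody fully trusts their mental model of what each filesystem promises.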


Users submit bugs and the reply is sometimes "cannot reproduce." :-)

But the more serious projects I've worked on use analytics and have semi-automated ways for users to send you stack traces and logs when they notice a bug.

Also, it's helpful to have a continuous integration setup that automatically runs integration tests on many platforms.


Usually, by crashing or exhibiting bugs and misbehavior.


Exactly. This is a language they want to run a new economy on, not just payment processing but banking, title transfers, and legal resolutions, and yet the metric it was judged on was "looks like JavaScript" and not, say:

* Compiler time safety (type safety, capabilities, rust and/or haskell style features).

* Built in formal verification features to ensure a function does what the developer (and reader!) thinks and no cases are missed.

* Explicit language design (so that the compiler isn't even required to do strong safety: every action must be spelled out, even if a compiler could deduce it).

* Paying attention to the fucking history of the field (and not doing things like insane name mangling, case-sensitive semantics (not even just names case sensitive by convention; the fucking semantics are case sensitive), and hard-to-reason-about VMs). Like come on people, at least learn from 20+ year old mistakes.


Solidity has far worse problems than not being an advanced research language. Just being a sanely designed normal language would be a big step up. Solidity is so riddled with bizarre design errors it makes PHP 4 look like a work of genius.

A small sampling of the issues:

Everything is 256 bits wide, including the "byte" type. This means that whilst byte[] is valid syntax, it will take up 32x more space than you expect. Storage space is extremely limited in Solidity programs. You should use "bytes" instead, which is an actual byte array. The native 256-bit wide primitive type is called "bytes32", but the actual 8-bit wide byte type is called "bytes1".

Strings. What can we say about this. There is a string type. It is useless. There is no support for string manipulation at all. String concatenation must be done by hand after casting to a byte array. Basics like indexOf() must also be written by hand or implementations copied into your program. Even to learn the length of a string you must cast it to a byte array, but see above. In some versions of the Solidity compiler, passing an empty string to a function would cause all arguments after that string to be silently corrupted.

There is no garbage collector. Dead allocations are never reclaimed, despite the scarcity of available memory space. There is also no manual memory management.

Solidity looks superficially like an object oriented language. There is a "this" keyword. However there are actually security-critical differences between "this.setX()" and "setX()" that can cause wrong results: https://github.com/ethereum/solidity/issues/583

Numbers. Despite being intended for financial applications like insurance, floating point is not supported. Integer operations can overflow, despite the underlying operation being interpreted and not implemented in hardware. There is no way to do overflow-checked operations: you need constructs like "require((balanceOf[_to] + _value) >= balanceOf[_to]);"
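A sketch (in plain Python, purely illustrative, not Solidity) of how EVM-style arithmetic wraps modulo 2^256, and of the hand-rolled require((x + y) >= x) overflow check the parent describes; the function names here are made up for the example:

```python
# EVM integers are 256 bits wide; unchecked addition wraps silently.
MOD = 2 ** 256

def evm_add(a, b):
    """Unchecked EVM-style addition: silently wraps on overflow."""
    return (a + b) % MOD

def checked_add(a, b):
    """The hand-rolled Solidity idiom: require((a + b) >= a)."""
    s = evm_add(a, b)
    if s < a:  # a wrapped sum is always smaller than either operand
        raise OverflowError("uint256 addition overflowed")
    return s
```

evm_add(2**256 - 1, 2) quietly returns 1, while checked_add raises instead; the check works because a wrapped result must be smaller than the first operand.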

You can return statically sized arrays from functions, but not variably sized arrays.

For loops are completely broken. Solidity is meant to look like JavaScript, but the literal 0 type-infers to uint8, not int. Therefore "for (var i = 0; i < a.length; i++) { a[i] = i; }" will enter an infinite loop if a[] is longer than 255 elements, because i will wrap around back to zero. This is despite the underlying VM using 256 bits to store this counter. You are just supposed to know this and write "uint" instead of "var".
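A quick simulation (in Python, just to illustrate the semantics) of why an inferred 8-bit counter never reaches index 256:

```python
def uint8_loop_indices(length, max_iters=1000):
    """Simulate `for (var i = 0; i < length; i++)` where i is a uint8."""
    i, visited = 0, []
    for _ in range(max_iters):
        if i >= length:
            break
        visited.append(i)
        i = (i + 1) & 0xFF  # uint8 increment wraps 255 -> 0
    return visited
```

With a 300-element array the counter wraps from 255 back to 0 and the loop never terminates (cut off here by max_iters); for arrays of 255 elements or fewer it behaves as expected, which is exactly what makes the bug easy to miss in testing.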

Arrays. Array access syntax looks like C or Java, but array declaration syntax is written backwards: int8[][5] creates 5 dynamic arrays of int8. Dynamically sized arrays work, in theory, but you cannot create multi-dimensional dynamic arrays. Because "string" is a byte array, that means "string[]" does not work.

The compiler is riddled with mis-compilation bugs, many of them security critical. The documentation helpfully includes a list of these bugs... in JSON. The actual contents of the JSON are of course just strings meant to be read by humans. Here are some summaries of miscompile bugs:

> In some situations, the optimizer replaces certain numbers in the code with routines that compute different numbers

> Types shorter than 32 bytes are packed together into the same 32 byte storage slot, but storage writes always write 32 bytes. For some types, the higher order bytes were not cleaned properly, which made it sometimes possible to overwrite a variable in storage when writing to another one.

> Dynamic allocation of an empty memory array caused an infinite loop and thus an exception

> Access to array elements for arrays of types with less than 32 bytes did not correctly clean the higher order bits, causing corruption in other array elements.

As you can see, the decision to build a virtual machine that is natively 256 bits wide led to a huge number of bugs whereby reads or writes randomly corrupt memory.

Solidity/EVM is by far the worst programming environment I have ever encountered. It would be impossible to write even toy programs correctly in this language, yet it is literally called "Solidity" and used to program a financial system that manages hundreds of millions of dollars.


> Despite being intended for financial applications like insurance, floating point is not supported

That's kind of a feature. Sure, you could use decimal floating point (but never, NEVER use common binary floats for money), but storing integers of the minimum currency unit, e.g. cents (typically wrapped in a Money class in OO languages), is also a good option.
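A tiny Python illustration of why binary floats are dangerous for money while integer minor units (and decimal floating point) are exact:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so the sum picks up rounding error:
float_total = 0.1 + 0.2          # 0.30000000000000004, not 0.3

# Integers of the minimum currency unit (cents) are exact:
cents_total = 10 + 20            # 30 cents, exactly

# Decimal floating point also represents decimal fractions exactly:
decimal_total = Decimal("0.1") + Decimal("0.2")  # Decimal('0.3')
```

This is the whole argument in three lines: the error isn't in the arithmetic, it's in the representation, so no amount of careful coding on top of binary floats fixes it.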


Nitpick: Most financial packages work on 1/100ths of a cent, not on cents. Otherwise yes, everything money-related should use fixed point and be really careful about over/underflow.

Although one fairly well-known package, produced by a place I once worked briefly, internally used doubles for all money values (at least when I worked there), wrapped in a class that re-rounded the results every so often. No, really.


You don't want to use floating point numbers to represent monetary amounts, however financial applications often work with numbers that are not money. Consider risk modelling.


Do you think this really belongs on a blockchain, as a transactional environment? There's a notion that things like greeks and other non-linear inputs are best fed as inputs/oracles, for a number of reasons: 1) avoiding stochastic stuff on-chain 2) assurance, so you know what your inputs were later 3) impracticality of all that computation on-chain 4) dependence on market data. Of course there are simple things like imputing an option price from the stock with just delta and gamma, but a fixed-point decimal here wouldn't really hurt you; basic calculations like payment schedules would seem to benefit from fixed-point. But mainly, blockchains would seem to represent transactions and workflows primarily; analytics seem ill-suited for the high-assurance, database-write-heavy environment.


My knowledge of Solidity comes from reading the docs. It doesn't seem to support fixed point arithmetic either. The phrase "fixed point" appears in the ABI spec but nowhere else, shrug. Maybe half implemented? I guess you can implement it yourself as it does support bit shifts, assuming they aren't buggy too.

I pass no judgement on what belongs on Ethereum. I know from their website that they advertise it as a platform for general app programming and even implementing entire autonomous businesses. It clearly cannot support these things.
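For what it's worth, hand-rolling fixed point on top of integers and shifts is straightforward; here's a minimal Python sketch of the kind of Q-format arithmetic one could port to a contract language (the scale and function names are my own invention, not any standard library's):

```python
SCALE_BITS = 64
ONE = 1 << SCALE_BITS  # fixed-point representation of 1.0

def to_fixed(n):
    """Integer -> Q64 fixed point."""
    return n << SCALE_BITS

def fmul(a, b):
    """Multiply two fixed-point numbers, rescaling the product."""
    return (a * b) >> SCALE_BITS

def fdiv(a, b):
    """Divide two fixed-point numbers, pre-scaling the numerator."""
    return (a << SCALE_BITS) // b
```

fdiv(to_fixed(1), to_fixed(2)) yields an exact 0.5, and multiplying it by 10 recovers exactly 5; the catch in an EVM setting is that the pre-scaling in fdiv and the product in fmul need enough headroom in 256 bits, so the scale has to be chosen with overflow in mind.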


The Ethereum VM is not for general app programming. It's really not your typical environment. EVM contracts get executed on every network node, and it must return the same results everywhere.


> it must return the same results everywhere.

We solved that for floats like 10 years ago. Let alone the fact there are better formats, like posits, or fixed point numbers, that also solve this problem very easily.


I agree with many of your criticisms (and have other ones for Solidity as well), but the lack of floating point is absolutely a feature, not a bug. There's no reason for floating point in a contract language. Floats should never be used for monetary values or counts, and you're not going to be doing numerics on the blockchain.


Technology will evolve.

Good points.


or "hey maybe we shouldn't have memory corruption errors and 'optimizations' that randomly change static values to god knows what" will be met with a bunch of cryptocurrency koolaid-chugging mouthbreathers screeching that these are features and not bugs, and now we have Ethereum "It's In The Contract That I Can Screw You Over, Get Owned Nerd" Classic, Ethereum Floating Point Is A Bug And Not A Feature Edition, and Ethereum I'm Excited To See What They Fucked Up And Will Have To Fork Next


What is the insane name mangling and case sensitivity that you mentioned there?


`Transfer` is an event, `transfer` is a function. This [0] is from TheDAO attack and is one of the (many) bugs making the attack so terrible.

As for name mangling, read this [1] and see if it seems sane to you.

For bonus points [2], `this.foo()` and `foo()` mean two wildly different things.

I don't even know what they were thinking.

[0] http://vessenes.com/deconstructing-thedao-attack-a-brief-cod...

[1] http://solidity.readthedocs.io/en/latest/abi-spec.html#

[2] https://github.com/ethereum/solidity/issues/583


>In general, the easier the code is to read, and the harder it is to write

Do you have any actual basis to back this up? My counterpoint would be Golang, which is designed exactly to be simple, and is usually really easy to read.

As in, I haven't found another language where jumping into a library and reading the internals is easier than in Golang.

EDIT: A counterpoint is JavaScript, a language which I use in my day to day, and similarly has quite simple syntax. But I can have trouble understanding what is going on depending on the tools used in the local environment.


I think you might be misinterpreting what GP is saying by trimming the end of the quote; they're not saying that making something easier to read makes it harder to write, they're saying that making something easy to read but hard to write is a worthy goal.


Also, I think "hard to write" is meant as "require that critical or dangerous details are written explicitly; if a feature adds convenience for writing at the expense of reading, it should be avoided". (Type inference, overloads and reflection come to mind)

I think (hope) that no one is advocating making a language verbose or complex for its own sake.


Like Rust? Rust does most of those things wrt explicit dangerous behaviour.


Exactly.


Thanks, I edited my comment to be more readable.


Golang is actually a good example of harder to write imo. It has good tooling that makes life easier, but unused variables, unused imports, and a lot of other things are errors. And if you do somewhat standard linting on top, it gets even more tedious.

Frankly if it weren't for the tooling I'd not be very sold on Go. The tooling totally sells it for me.


Nobody prevents you from writing contracts in a more restrictive language. For example: https://github.com/ethereum/viper

Or create your own. I am sure the creators of Solidity are aware of its limitations and quirks. But as far as I can tell, they felt they had to come up with something fast. And it grew from there.

But as I said: feel free to create your own language for the EVM if Solidity does not fit your needs or requirements. With a system allowing Turing completeness, it is possible to create a language that removes Turing completeness (for more security). It would be impossible the other way round.

