Very topical speech from the Fed on regulation (current and future) of AI in the banking industry.
tl;dr: Opaque AI models are OK in some areas, such as fraud prevention and cybersecurity. But in other areas, like consumer credit scoring, transparency is key.
What if they had access to the IP that is currently locked away by the likes of Apple/Google?
What if anyone who wanted to replicate a good design (and improve upon it) were able to do so without restrictions, or fear of lawsuits?
Conversely, what if the supposedly sacrosanct right to profit from a one-time invention (as if no one else could ever come up with a similar idea on their own, though history has repeatedly shown otherwise) had been applied throughout history? Would railways, electricity, or the simple things we now take for granted have become so widespread? Maybe we would be flying in counterfeit airplanes.
>What if they had access to the IP that is currently locked away by the likes of Apple/Google? What if anyone who wanted to replicate a good design (and improve upon it) were able to do so without restrictions, or fear of lawsuits?
Then nobody would invest any money in releasing stuff, and we would still be living in the stone age.
Here's an interesting article about 19th century Germany when no copyright laws existed yet:
The Real Reason for Germany's Industrial Expansion?
Did Germany experience rapid industrial expansion in the 19th century due to an absence of copyright law? A German historian argues that the massive proliferation of books, and thus knowledge, laid the foundation for the country's industrial might.
> The ship's velocity can then be represented by a rotation of the axes; the rotation of the time axis is shown in Figure II. (Readers who find Figure II puzzling should recall that a diagram of an imaginary axis must, of course, itself be imaginary).
The ancient humans would then also (eventually) have developed patents, because someone somewhere would have thought of the idea even back then - and with worldwide distribution, it would have caught on.
Effectively, what I'm saying is: when faced with a common greater good that is a game changer for humanity, humanity has to collectively want that greater good. And humans can be selfish - ancient or not.
I could be wrong, but I don't think that's how it works. You're comparing the true size of the UK with the flat-map projection of the US. To do what you want, you need to compare the true maps of both countries, with the cities.
No, because this map scales the country according to its latitude. So when you drag down England to be next to Atlanta, you're looking at the size of England with respect to Atlanta. The "true size" is just "true" in comparison to whatever it's on top of. So it should be accurate.
Battle-tested in the past != fit for future wars. The nature of the systems has evolved. Attackers are more sophisticated, have better tools, and probably understand C code better than most who still write it. So our tools need to evolve too. Admittedly, rewriting entire libraries in another language sounds scary, but hey - since when did fear of the future stop it from coming?
1. Memory safety issues are the cause of a very large number of security vulnerabilities (often most of them for projects written in C or C++, depending on the software).
2. Memory safety-related issues have a relatively high probability of being turned into remote code execution, which is one of the most severe outcomes, if not the most severe.
3. C and C++ projects have been empirically observed to have orders of magnitude more memory safety problems than projects written in other languages do.
4. The additional classes of security vulnerabilities that managed languages tend to foster do not have the combined prevalence and severity of memory safety problems.
So, we would be better served in security by moving to memory-safe languages.
> C and C++ projects have been empirically observed to have orders of magnitude more memory safety problems than projects written in other languages do.
Note that this includes projects in languages that are themselves written in C or C++, which shows there is some value in confining the unsafe code to a small, well-tested core (in this case, the language runtime). Honestly, it seems like half the value comes just from not using C strings, since pretty much every other language has its own string library that does not use null termination.
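To make the C-string point concrete, here is a minimal sketch in Rust (chosen only as one example of a length-carrying string type; the `byte_at` helper is invented for illustration):

```rust
// A string that carries its length makes out-of-range access an explicit,
// recoverable condition instead of a silent read past a NUL terminator,
// as C's strlen/strcpy idioms can produce.
fn byte_at(s: &str, i: usize) -> Option<u8> {
    // get() is bounds-checked: returns None rather than reading past the end
    s.as_bytes().get(i).copied()
}

fn main() {
    assert_eq!(byte_at("hello", 1), Some(b'e'));
    assert_eq!(byte_at("hello", 10), None); // no over-read, no crash
}
```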
1) Runtime overhead for some form of GC (D, Lisp, etc)
2) Rephrasing a program to satisfy a memory constraint checker (Rust)
3) Disciplined memory usage (e.g. NASA C coding guidelines)
We don't have enough experience with option 2 to know whether it will create new classes of bugs. We also don't understand the knock-on effects of managing memory differently: will functionally identical programs require more or fewer resources, more or fewer programmer-hours, etc.?
Rust may very well be the future, but we don't know for sure yet.
One thing we do know: options 1 and 3 have been available for years, but not widely utilized. What lessons can we learn from this fact to apply to Rust?
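As a toy illustration of what "rephrasing a program to satisfy a memory constraint checker" looks like in practice (a minimal sketch, not a claim about real-world rewrite cost): Rust's borrow checker rejects returning a reference to a local, and the program must be restated in terms of ownership.

```rust
// Rejected by the borrow checker: the String is dropped when the function
// returns, so the reference would dangle.
//
// fn broken() -> &'static str {
//     let s = String::from("hi");
//     &s // error[E0515]: cannot return reference to local variable `s`
// }

// The "rephrased" version satisfies the checker by transferring ownership
// of the value to the caller instead of lending a reference.
fn rephrased() -> String {
    String::from("hi")
}

fn main() {
    assert_eq!(rephrased(), "hi");
}
```

The fix here is mechanical, but whether such rephrasings stay mechanical at the scale of a full library is exactly the open question raised above.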
Logic bugs. Failure to correctly adapt imperative algorithms while still satisfying the constraint checkers.
Not all security bugs are related to memory. Many are related to improperly written algorithms (most crypto attacks), or improperly designed requirements (TLSv1).
Even Heartbleed was primarily due to a logic bug (trusting tainted data) instead of an outright memory ownership bug.
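A hypothetical sketch of that kind of bug in safe Rust (all names invented for illustration): every access is bounds-checked, yet trusting a client-supplied length still leaks stale bytes from a reused buffer.

```rust
// Reused request buffer + trusted client-claimed length = Heartbleed-style
// leak, entirely within safe, bounds-checked code.
fn handle(buf: &mut [u8; 64], payload: &[u8], claimed_len: usize) -> Vec<u8> {
    buf[..payload.len()].copy_from_slice(payload); // write only the real payload
    // BUG: echoes `claimed_len` bytes, exposing leftovers from earlier
    // requests. The slice stays in bounds, so the compiler has nothing
    // to object to -- this is a logic bug, not a memory-safety bug.
    buf[..claimed_len.min(buf.len())].to_vec()
}

fn main() {
    let mut buf = [b'S'; 64]; // 'S' = stale secret left by a prior request
    let leaked = handle(&mut buf, b"hi", 10);
    assert_eq!(&leaked[..], &b"hiSSSSSSSS"[..]); // 2 real bytes + 8 leaked
}
```

The difference from C is the failure mode: an honest out-of-bounds read would panic instead of walking the heap, but the taint-trusting logic error survives the language change.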
Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.
> Logic bugs. Failure to correctly adapt imperative algorithms while still satisfying the constraint checkers.
Oh, also: If you're implying that Rust's ownership discipline can create security bugs where there were none before, I consider that a real stretch. I'd need to see an actual bug, or at least a bug concept, that Rust's borrowing/ownership rules create before accepting this.
> Not all security bugs are related to memory. Many are related to improperly written algorithms (most crypto attacks), or improperly designed requirements (TLSv1).
Nobody is saying that Rust eliminates all security bugs. Just a huge number of the most common ones.
> Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.
This is a problem that will be there equally in all languages.
Perhaps less so in languages with a better type system, but that doesn't affect Rust, since there aren't any _systems_ languages with a better type system.
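For what it's worth, in safe Rust the question largely disappears: you cannot obtain an allocation without supplying an initial value, so there is no "automatically zeroed or not" ambiguity to begin with (a minimal sketch; the `unsafe` escape hatches are a different story).

```rust
fn main() {
    // Safe Rust allocations are initialized at the point of creation;
    // reading uninitialized memory requires an explicit `unsafe` block.
    let zeroed = vec![0u8; 16]; // the zero fill is written out, not implicit
    assert!(zeroed.iter().all(|&b| b == 0));
}
```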
You most definitely know a lot more about code than I do, so I'm not challenging you at all. But my contention is that robustness is a function of resilience. C has a class of errors that are hard to spot for foot soldiers, and sometimes even for generals, and that can leave deadly chinks in the armor undetected for a long time. If Rust does away with that specific class of errors altogether, what's wrong with that? Rewriting code.
And if that seems like a challenge worth attempting for some people, I can't fault it. And in the process maybe we will discover more bugs, or maybe something better. It's evolution, no?
It's not a bad thing that Rust addresses these issues. That's good, and essential for newer languages - it wouldn't make sense not to try to solve these problems in a language that, relatively speaking, intends to be lower-level (like Rust).
The paper on the Limits of Correctness that I mentioned above does a good job of arguing my point. Even if you rewrote glibc in a language like Coq and formally proved its correctness, that wouldn't mean it is "correct" in the sense that its logic mimics glibc - there could still be logic errors.
So you might gain confidence (or guarantees) by rewriting glibc in Rust, but in rewriting it you potentially introduce a host of new issues.
This is brilliant. I work in finance (trading) - and have been trying to learn Clojure and Rust on my own over the last year (no CS background).
For people like me - this is phenomenal - if it works.
The ability to think logically doesn't translate directly into the ability to program, or vice versa, and a person must spend a considerable amount of time learning the tools of programming. This tool could fill that gap for non-CS engineers.
A couple of questions (since you mentioned it is a functional language):
a) In the opinion of the makers - what is this language NOT good for?
b) How good are the mathematical libraries going to be? As a benchmark I'd take Mathematica (not for speed - but width and depth of mathematical functions). If this question is redundant due to the ability to call Haskell - feel free to say so.
c) How far does the automatic parallelism go? Is it truly distributed over any scale?
Thanks again
Hello! First of all - thank you for a nice greeting! :)
Answering your questions:
a) This language is NOT good for really low-level stuff like programming realtime hardware systems or low-level processor programming. You should use pure C or assembler for that instead. For other low-level stuff, you could feel comfortable using Luna's textual representation. The visual one gives an incredible boost when designing the high-level architecture.
b) At the current stage Luna ships with a very limited standard library, but we will work hard to incorporate as many useful libraries as possible. As you have spotted, Luna can call Haskell directly, and we also provide a foreign C interface, so binding any Haskell library works almost out of the box right now.
c) We are working hard on this topic. Currently our graphs can run in parallel all operations that are not data-dependent in any way. However, it does NOT scale well to distributed systems yet, because it is not easy to determine where to partition your program so that sending data over the network does not kill performance. We will, though, be working toward addressing this in the longer term.
Thanks for replying!
(a) and (b) make sense. (b) will depend on adoption as well, so your answer is fully understandable.
Just a follow-up question on (c), if you will allow me. Are the data structures immutable, like Clojure's and Erlang's, so that they could eventually be distributed once the optimizations are done (and you are able to package non-functional code like C into its own partitions for distribution)? The reason I ask is that the page mentions Category Oriented Programming, which I am not at all familiar with.
Regards
I would really love to answer any questions regarding Luna, so feel free to ask as many as you need!
c) Yes, Luna supports immutable data structures in the form of composable algebraic data types. Composability means, in this context, that you can compose sum and product types directly using Luna's built-in combinators, so you can, for example, build data types that share some constructor definitions (this is just a generalization of algebraic data types; as a side note, the first Luna release lacks some mechanisms to fully handle this abstraction, which is understandable for an alpha release). Luna data types allow for efficient binary serialization, if that is what you meant by "own partitions for the distribution". The Category Oriented Programming paradigm, on the other hand, was developed by us. It is described at length in the Luna user manual, which for now will be available to alpha testers only, but we will work toward releasing it to the public as fast as possible.
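For readers unfamiliar with the terminology: "sum and product types" are the standard algebraic-data-type building blocks. A rough sketch in Rust (not Luna syntax, and without Luna's constructor-sharing combinators - just the general idea):

```rust
// Product type: combines fields (a value has an x AND a y).
struct Point { x: f64, y: f64 }

// Sum type: alternative constructors (a value is a Circle OR a Rect).
enum Shape {
    Circle { center: Point, r: f64 },
    Rect   { min: Point, max: Point },
}

// Pattern matching must handle every constructor, which is what makes
// these types pleasant to program against.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r, .. } => std::f64::consts::PI * r * r,
        Shape::Rect { min, max } => (max.x - min.x) * (max.y - min.y),
    }
}

fn main() {
    let r = Shape::Rect {
        min: Point { x: 0.0, y: 0.0 },
        max: Point { x: 2.0, y: 3.0 },
    };
    assert_eq!(area(&r), 6.0);
}
```

Immutability of such values is what makes them safe to copy or serialize for distribution, which connects back to question (c).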