> a crappy but cost-effective approach with a high margin of error may rise to prominence.
The tech startup that eventually creates that will call this "efficiency". This type of 'solution' is exactly what capitalism creates.
> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity
> It would be hard to make it not distracting
Not only would it be distracting, it could also bias you in unexpected ways.
> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity,
I already don't trust a lot of the mental health industry because of the very bad experiences[1] I've had in the past. The easiest/fastest way to guarantee I never visit a psychiatrist again is to start using that kind of "AI" tech without first showing me the source code. "Magic" hidden algorithms are already a problem in other medical situations like pacemakers[2] and CPAP[3] devices.
> make up for the trade-off of not being in the room
Maybe what you need isn't some sort of "AI" or other tech buzzword. It sounds like you need better communication technology that doesn't lose as much information.
--
On the more general topic of "AI in psychiatry", I strongly encourage you to play the visual novel Eliza[4] by Zachtronics. It's about your fear of a cheap, high error rate system with an additional twist: the same system also optimizes your role into a "gig economy" job.
The major advantage of a language that isn't Turing complete is not having the major risk inherent to Turing complete languages: asking if any non-trivial program will produce any given result or behavior is undecidable[1].
> write a program that runs until the heat death of the universe even in a Turing-incomplete language.
The Halting Problem is just a simple example of program behavior. The undecidability extends to any other behavior. Asking if a given program will behave maliciously is still undecidable even if we only consider the set of programs that do halt in a reasonable amount of time.
When you are using a regular language or a deterministic pushdown automaton, questions about behavior, or even whether two implementations are equivalent, are decidable. It is at least possible to create software/tools that help answer the question "is this input safe?" When you use a non-deterministic pushdown automaton or anything stronger, the problem becomes provably undecidable.
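To make the decidable side of that divide concrete, here is a minimal Python sketch (the field names and length limits are made up for illustration, and the pattern stays within the regular subset of Python's re syntax), so recognition always terminates with a definite yes/no answer:

    import re

    # Hypothetical message format: "NAME=VALUE", where NAME is 1-32 alphanumeric
    # characters and VALUE is up to 64 printable ASCII characters.
    WELL_FORMED = re.compile(r"\A[A-Za-z0-9]{1,32}=[ -~]{0,64}\Z")

    def is_safe_input(line: str) -> bool:
        """Recognize the input before any code is allowed to interpret it."""
        return WELL_FORMED.fullmatch(line) is not None

    assert is_safe_input("color=blue")
    assert not is_safe_input("color=blue\nrm -rf /")  # embedded newline: rejected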
I highly recommend the talk "The Science of Insecurity"[2].
> The major advantage of a language that isn't Turing complete is not having the major risk inherent to Turing complete languages: asking if any non-trivial program will produce any given result or behavior is undecidable[1].
It's not obvious to me that I should care about this property in a configuration language. For a given configuration use case, I probably have a good idea about the extreme upper-bound for a correct program--say, 5s. If the program runs for 30s, the supervisor kills it.
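A minimal sketch of that supervisor pattern in Python, assuming the configuration is evaluated by an external command (the command name and the 30-second budget are placeholders):

    import subprocess

    def evaluate_config(cmd: list[str], timeout_s: int = 30) -> str:
        """Run the config program; kill it if it exceeds the time budget."""
        try:
            result = subprocess.run(
                cmd, capture_output=True, text=True, timeout=timeout_s, check=True
            )
        except subprocess.TimeoutExpired:
            raise RuntimeError(f"config program exceeded {timeout_s}s and was killed")
        return result.stdout

    # e.g. evaluate_config(["my-config-evaluator", "prod.cfg"])  # placeholder command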
> The Halting Problem is just a simple example of program behavior. The undecidability extends to any other behavior. Asking if a given program will behave maliciously is still undecidable even if we only consider the set of programs that do halt in a reasonable amount of time.
What's a malicious action in a configuration use case that a Turing complete program could muster but not a non-Turing-complete program?
You know what? I'd give the whole superheroism gig a try. Keep an eye on the comic book store shelves for "The Phantom Clenching Adventures of ASS-LESS CHAP!!".
> The problem is that the people using said libraries don't know what they're doing.
I would like to suggest that the problem is that the skilled people that understand how to write crypto systems are providing libraries that are easy to misuse. Instead of providing a library that is likely to be used incorrectly without a lot of specialized knowledge, provide infrastructure that manages the crypto so the average developer doesn't need to become an expert on crypto systems.
HTTPS is a useful example. Instead of handing webapp authors a library of cipher/hash functions, warning them not to roll their own transport layer security, and then acting shocked when they use that library and make a lot of mistakes, we separate the crypto into its own layer of infrastructure so the average webapp author can easily use crypto without having to learn a lot of specialized knowledge. Someone writing a Ruby on Rails app shouldn't have to write functions like pervade_encrypt($data)/pervade_decrypt($data) to move out of the plaintext world of HTTP and utilize encrypted transport. They only need to buy/LetsEncrypt a cert they can install in their webserver. They can even delegate that to their hosting provider.
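A minimal Python sketch of that separation (the certificate paths and port are placeholders): the handler below is ordinary plaintext application code, and all of the crypto is confined to the ssl layer wrapped around the listening socket.

    import http.server
    import ssl

    class Hello(http.server.BaseHTTPRequestHandler):
        # Ordinary application code: no cipher choices, no key handling.
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello over TLS\n")

    httpd = http.server.HTTPServer(("", 8443), Hello)

    # The crypto lives in this infrastructure layer; cert.pem/key.pem are
    # placeholder paths for a certificate from e.g. Let's Encrypt.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()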
"If it's possible for a human to hit the wrong button and [cause a catastrophic failure] by accident [or inexperience], then maybe the problem isn't with the human - it's with that button. [...] People make mistakes, and all of our systems need to be designed to be ready for that."
> I would like to suggest that the problem is that the skilled people that understand how to write crypto systems are providing libraries that are easy to misuse. Instead of providing a library that is likely to be used incorrectly without a lot of specialized knowledge, provide infrastructure that manages the crypto so the average developer doesn't need to become an expert on crypto systems.
I agree, but for the encryption basics, those libraries are available. Higher-level languages like Python/Ruby/etc. all have libraries that do TLS transparently. Lower-level languages have libraries like BoringSSL/GnuTLS/OpenSSL/(Windows|macOS) APIs available, which are relatively straightforward; you need to do error handling yourself, so there's a whole bunch of API calls, but that's because an application needs to deal with errors and thrown exceptions aren't always available/desirable.
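For example, in Python the standard library negotiates TLS and verifies the server certificate without the application touching a single crypto primitive:

    from urllib.request import urlopen

    # Certificate validation and cipher negotiation happen inside the library;
    # the application never sees a key or a cipher suite.
    with urlopen("https://example.com/") as resp:
        body = resp.read()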
In the example, the code authors wanted to encrypt a file and then sign it using their own algorithm. There are open source utilities for this, of course, but they chose to implement it their own way. They didn't build a bad implementation of AES, they didn't write a bad hashing algorithm, they didn't mess up RSA.
They could've done this perfectly safely by switching the order around, adding a second key, and picking a better file format; they were very close! They could've prevented all of these problems by just using GCM instead of CBC+HMAC inside their own format.
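As a rough illustration of the GCM route, a sketch using the third-party `cryptography` package (key management is elided; the key is generated on the spot purely for demonstration):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # in practice: from a keyfile/KMS
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)            # must be unique per (key, message)
    plaintext = b"the file contents"
    header = b"file-format-v1"        # authenticated but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, header)
    # Any tampering with ciphertext or header makes decrypt() raise InvalidTag,
    # so there is no separate MAC step to get wrong.
    assert aesgcm.decrypt(nonce, ciphertext, header) == plaintext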
Personally, I would've used GPG and skipped the custom implementation altogether. This is a software product that exposes an endpoint that will execute the command in a GET query, so why not use a bit of exec() to do all the weird crypto for you?
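If you do shell out to GPG, the whole encrypt-and-sign dance collapses into one command; a sketch, with placeholder file and key IDs:

    import subprocess

    def gpg_encrypt_and_sign(path: str, recipient: str, signer: str) -> None:
        # Writes path + ".gpg"; gpg handles the cipher, padding and signature format.
        subprocess.run(
            ["gpg", "--encrypt", "--sign",
             "--recipient", recipient, "--local-user", signer, path],
            check=True,
        )

    # e.g. gpg_encrypt_and_sign("report.dat", "ops@example.com", "build@example.com")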
> HTTPS is a useful example. Instead of handing webapp authors a library of cipher/hash functions, warning them not to roll their own transport layer security, and then acting shocked when they use that library and make a lot of mistakes, we separate the crypto into its own layer of infrastructure so the average webapp author can easily use crypto without having to learn a lot of specialized knowledge. Someone writing a Ruby on Rails app shouldn't have to write functions like pervade_encrypt($data)/pervade_decrypt($data) to move out of the plaintext world of HTTP and utilize encrypted transport. They only need to buy/LetsEncrypt a cert they can install in their webserver. They can even delegate that to their hosting provider.
For TLS you still need to configure a proxy. You need to make sure not to paste the private key in the cert field, or every browser will receive your private key; you need to have _some_ crypto knowledge, and this is the easiest process you can probably think of. You also need to check your auth/blacklisting/security code to accept the proxy and make sure you use the right headers for checking the remote user IP against blacklists etc. Enabling TLS is relatively easy, but it's still not a transparent process, simply because getting it right is not a trivial task.
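The "right headers" part alone is easy to get wrong; a small Python sketch of the idea, with a placeholder proxy address:

    TRUSTED_PROXIES = {"10.0.0.5"}  # placeholder: the TLS-terminating proxy

    def client_ip(peer_addr: str, headers: dict) -> str:
        """Pick the address to check against blacklists / rate limits."""
        if peer_addr in TRUSTED_PROXIES and "X-Forwarded-For" in headers:
            # Take the right-most entry: that's the peer our own proxy saw.
            # Anything to its left could have been forged by the client.
            return headers["X-Forwarded-For"].split(",")[-1].strip()
        # Direct connection (or untrusted peer): trust only the socket address.
        return peer_addr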
A single-browser web, like any monoculture, dramatically increases the potential damage of common-mode failure. With everything - even important infrastructure and services - becoming directly or transitively dependent on the web, we should be diversifying the browser ecosystem to limit the scope of a class break[1].
That is only true if we pretend those 13 bits are the only identifying information being sent to Google when requesting a font. The HTTP request is almost certainly being sent to Google wrapped inside an IP packet. For most[1] requests, there are at least 24 additional bits (why 24? see: [3]) of very-identifying data in the IPv4 Source Address field. More fingerprinting can probably be done on other protocol fields, and IPv6 obviously adds an additional 96 bits. Yes, IP addresses are not unique, but ~13 bits is easily sufficient to disambiguate most hosts on a private network behind a typical NAT. Correlating the tuple {IPv4 Src Addr, x-client-data} received on a font request is trivial: it only requires a user to log in to any Google webpage that includes a font request.
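Rough back-of-the-envelope arithmetic for the NAT point, treating the seed as uniformly random and using a made-up NAT size purely for illustration:

    import math

    seed_values = 8000                 # x-client-data low-entropy seed space
    print(math.log2(seed_values))      # ~12.97 bits

    # Made-up assumption: a few hundred hosts share one public IPv4 address
    # behind a typical NAT.
    hosts_behind_nat = 300
    # Expected number of *other* hosts sharing any one host's seed value:
    print((hosts_behind_nat - 1) / seed_values)   # ~0.037 -> usually unique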
>> re: your [1]
    A given Chrome installation may be participating in a number of different variations (for different features) at the same time. These fall into two categories:

    Low entropy variations, which are randomized based on a number from 0 to 7999 (13 bits) that's randomly generated by each Chrome installation on the first run.

    High entropy variations, which are randomized using the usage statistics token for Chrome installations that have usage statistics reporting enabled.
How many users have 'usage statistics reporting' enabled and are therefore subject to a "High entropy variation"? Is it enabled by default, and thus only disabled by the minority of people who know how to opt out?
[1] Google reports[2] they currently see about a 60%/40% ratio of IPv4/IPv6.
The solution to this type of intentionally incompatible product is to return to a legal and cultural environment that respects adversarial interoperability[1]. If a company doesn't want to implement the features people want[2], some other company should be able to provide their own (possibly reverse engineered) implementation.
Trying to restrict competitors from making interoperable products is admitting you don't want to participate in a well-running competitive market and instead deserve monopoly power.
It was kind of a brilliant move by Shimano, in retrospect. Since Shimano did share with them how to do everything, it's now much harder for Hammerhead to go and implement those features cleanly after the contract has been invalidated - any reimplementation will look derived from what Shimano disclosed.
Seems like a good time for SRAM to realize they should open up info on how to create apps for the Hammerhead devices, so indie devs can figure out how to reverse engineer Shimano's system.
> I could not enter my own apartment because my phone was dead
Whenever I hear about "smart" devices as a replacement for something that is safety/security critical (like a lock), the question of what happens when the internet and/or power fails is rarely even considered. Does the lock fail open or closed? Does the door open if there is a fire in the building that damages the internet/power wiring? If it fails open, does that mean someone can bypass the lock by simply cutting the network/power cables outside the building?
There might be reasonable answers to these questions at a large business building that can afford fallback options, but I'm not sure there are good answers for e.g. residential situations.
Residential smart locks I've seen are wireless, with batteries and a keypad, so any networking (zwave, zigbee) or lack thereof doesn't affect that basic operation. And egress is never blocked by anything.
If the batteries die and you need to get inside, you need to have a physical key or an alternative ingress.
Paula: "...what's that?"
Blank Reg: "It's a book!"
Paula: "Well, what's that?"
Blank Reg: "It's a non-volatile storage medium.
It's very rare. You should have one."
[1] a brief description of one of those experiences: https://news.ycombinator.com/item?id=26035775
[2] https://www.youtube.com/watch?v=k2FNqXhr4c8
[3] https://www.vice.com/en/article/xwjd4w/im-possibly-alive-bec...
[4] https://www.zachtronics.com/eliza/