I can confirm the same experience (my company uses both AWS and Google Cloud): on Google Cloud, most of the human touchpoints related to startup credits are with the sales team rather than an account manager (we still don't have one after about a year), and that's only if you manage to get past their usual generic responses.
Agreed that FHIR APIs are becoming ubiquitous, especially with Apple Health pushing adoption in consumer-facing applications, though not so much on the legacy enterprise side from what I've seen so far (I work primarily in clinical trials). We prototyped our own integration stack to help wrangle the different standards (or rather, the lack thereof). Building that ourselves in the early days was worth it: we learned a great deal about the challenges of integrating with our partners, and we will look into interoperability partners at some point as we scale.
The scientific process typically involves drawing inferences and conclusions from observations, and the results often include biases, and in some cases even ill-constructed hypotheses, caused by assumptions and imperfect information.
One way to reduce 'fraud' and increase the accountability of individuals and organizations is to secure raw data at its source, so that the veracity of the data subsequently used to support downstream conclusions can be easily verified and attributed. I'd say a bottom-up approach here is more holistic, as it incentivizes people to produce high-quality research that future work can then reliably be built on top of.
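To make "secure raw data at its source" concrete, here is a minimal sketch of the idea: fingerprint the raw data with a cryptographic hash at collection time, publish the digest (e.g. to an append-only ledger), and let anyone downstream verify that the data they were handed matches what the source recorded. The `reading` value and function names are illustrative, not from any real system.

```python
import hashlib

def fingerprint(raw_data: bytes) -> str:
    """Hash raw data at the point of collection so it can be verified later."""
    return hashlib.sha256(raw_data).hexdigest()

def verify(raw_data: bytes, digest: str) -> bool:
    """Downstream check: does this data match the digest published at the source?"""
    return fingerprint(raw_data) == digest

# At the source: record the digest alongside the data at collection time.
reading = b"sample-042,temperature=36.6C,2018-05-01T12:00:00Z"
published_digest = fingerprint(reading)

# Later: a citation of the data can be checked against the source digest.
assert verify(reading, published_digest)
# Any tampering with the raw record fails verification.
assert not verify(b"sample-042,temperature=35.0C,2018-05-01T12:00:00Z", published_digest)
```

The hash alone only proves integrity; attributing the data to its source would additionally require the collector to sign the digest, which is the part a blockchain or similar ledger would anchor.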
Love the format and illustrations, though I feel a high-level comparison between the different Hyperledger projects (especially Fabric versus Sawtooth) is probably important here. Happy to contribute/collaborate on this.
Faust looks really cool with its native Python implementation. How does it compare with Apache Beam or Google's Dataflow, now that they have rolled out their Python SDK?
Faust is a library that you can import into your Python program, and all it requires is Kafka. Most other stream processing systems require additional infrastructure. Kafka Streams has similar goals, but Faust additionally enables you to use Python libraries and perform async I/O operations while processing the stream.
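The pattern described above (performing async I/O while consuming a stream) can be sketched with nothing but the standard library. To keep this runnable without a Kafka broker, the sketch below uses a plain async generator as a stand-in for a topic; this is an illustration of the pattern, not Faust's actual API.

```python
import asyncio

async def stream():
    # Stand-in for a Kafka topic: an async iterator of messages.
    for msg in ("orders:1", "orders:2", "orders:3"):
        await asyncio.sleep(0)  # yield control, as real broker I/O would
        yield msg

async def enrich(msg: str) -> str:
    # Placeholder for an async I/O call made per message
    # (HTTP lookup, database query, etc.).
    await asyncio.sleep(0)
    return msg.upper()

async def process():
    results = []
    async for msg in stream():
        results.append(await enrich(msg))
    return results

print(asyncio.run(process()))  # ['ORDERS:1', 'ORDERS:2', 'ORDERS:3']
```

In Faust, the `async for` loop over messages lives inside an agent attached to a topic, but the shape of the processing code is the same.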
One of the core tenets of decentralized applications is the use of programmatic models (i.e. smart contracts) to enforce trust between digital entities whose relationships are often adversarial in nature. Such relationships are very common in enterprise business: think of a vendor/client relationship where the obligated parties are bound by legal contracts to fulfill a set of predefined responsibilities, with the vendor expecting payment for the services provided and the client expecting a reasonable quality of service in return.
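To illustrate what "enforcing trust programmatically" means in the vendor/client case, here is a toy escrow model in plain Python (not an actual smart-contract language like Solidity; all names are hypothetical). The point is that payment release is decided by code evaluating agreed conditions, not by either party's goodwill.

```python
class EscrowContract:
    """Toy model of a smart contract mediating a vendor/client agreement."""

    def __init__(self, client: str, vendor: str, price: int):
        self.client, self.vendor, self.price = client, vendor, price
        self.funded = False
        self.delivered = False
        self.paid = False

    def deposit(self, amount: int) -> None:
        # The client locks the payment up front; neither party controls it now.
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.funded = True

    def confirm_delivery(self) -> None:
        # Recorded once the agreed service has been rendered.
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # Payment releases automatically once both conditions hold;
        # neither party can unilaterally withhold or seize the funds.
        if self.funded and self.delivered:
            self.paid = True

contract = EscrowContract("client-A", "vendor-B", price=100)
contract.deposit(100)
contract.confirm_delivery()
assert contract.paid
```

A real deployment would replace `confirm_delivery` with an oracle or multi-party attestation, since "was the service rendered?" is exactly the adversarial question the parties disagree on.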
This doesn't mean we should put every legal contract on the blockchain, but rather that we should implement blockchain-based solutions starting with the lowest-hanging-fruit problems that reduce inefficiencies in the short term. The benefits derived from this will then induce a paradigm shift in how we build businesses and applications that further leverage the increased efficiency (capital liquidity, faster/cheaper transactions, etc.), eventually giving rise to novel business models and use cases that are neither possible nor fathomable today.
Excellent work. Do you have plans to open-source the scripts/implementation details used to reproduce the results? It would be great if others could also validate and repeat the experiment for future software updates (e.g. TensorFlow 1.8), as I expect there will be some performance gains for both TPU and GPU from CUDA and TensorFlow optimizations.
Sidenote: Love the illustrations that accompany most of your blog posts. Are they drawn by an in-house artist/designer?
Happy you like the post! The implementations we used are open source (we reference the specific revisions), so reproducing results is possible right now. We haven't thought about publishing our small scripts around that (there's not much to it), but it's a good idea. There's also work towards benchmarking suites like DAWNBench (https://dawn.cs.stanford.edu/benchmark/).
The illustrations are from an artist/designer we contract from time to time. I agree, his work is awesome!
Basically, the finance industry is trying to shoehorn its legacy business systems into blockchain, which of course yields no obvious advantage beyond the discovery that it is much more difficult to reimplement what already works on a different system. My view is that blockchain ought to find a very specific problem (e.g. trust as a service) that cannot be achieved with existing technology.