Ask HN: What are the most interesting emerging fields in computer science?
381 points by Norther on Aug 6, 2018 | 177 comments
Hey HN,

What do you think is the most interesting emerging field in Computer Science? I'm interested in PhD areas, industry work, and movements in free software.




Secure Multi-party Computation.

The basic idea is developing methods for two (or more) parties with sensitive data to be able to compute some function of their data without having to reveal the data to one another.

The classic example is developing an algorithm that allows two people to figure out who is paid more without either revealing what their salary is.

Such algorithms get significantly more complicated if the threat model starts changing from "we're all acting in good faith, but we just don't want to share this private info" to "I'm not sure some of the people involved in this are acting in good faith."

Based on my (admittedly limited) look into this field, it seems like there has been some theoretical progress made here, but there's nothing like a generalized framework or library for general development with it. Instead, practical applications seem to be one-offs. For example, a contractor a while back developed a system that lets parties (nation-states or private space firms) figure out if their satellites are going to run into each other without revealing anything about the location or orbit of their satellites. That way they don't share sensitive data, but they can move their satellites if they're on a collision course with somebody else.

Personally, I got interested in this when working for the government. I was working on an extremely cool data integration project (a State Longitudinal Data System grant from the US Department of Education) that basically went nowhere because we couldn't get over the legal hurdles to data sharing... If we didn't have to share data, but could still compute interesting statistics about the data, that would have been really cool.
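To make the flavour concrete, here is a minimal sketch (my own toy example, not anything from the projects above) of additive secret sharing in Python: three parties learn the sum of their salaries, yet no individual salary is ever revealed to anyone.

    import random

    Q = 2**61 - 1  # large prime modulus; all arithmetic is done mod Q

    def share(secret, n_parties):
        """Split a secret into n random-looking additive shares."""
        shares = [random.randrange(Q) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % Q)
        return shares

    def reconstruct(shares):
        return sum(shares) % Q

    # Each party splits its salary and hands one share to every other party.
    salaries = {"alice": 62_000, "bob": 71_000, "carol": 58_000}
    all_shares = {name: share(s, 3) for name, s in salaries.items()}

    # Party i locally adds up the i-th share it received from everyone...
    partial_sums = [sum(all_shares[name][i] for name in salaries) % Q
                    for i in range(3)]

    # ...and only these partial sums are published; their total is the answer.
    print(reconstruct(partial_sums))  # 191000, with no salary ever revealed

This only covers the honest-but-curious setting and only addition; comparisons like the salary example need heavier machinery (garbled circuits, oblivious transfer), which is where the complexity mentioned above comes in.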


It has zero-knowledge proofs at its core, as far as I know. Only a handful of universities even teach it. This was helpful when I was scratching the surface of MPC: https://github.com/rdragos/awesome-mpc

This video course too: https://www.cse.iitb.ac.in/~mp/crypto/mpc2017/


This is an awesome resource, thank you.


I'd second and generalize to basically any area of research into expanding or applying cryptographic protocols.

There was a fascinating area I read about in grad school focused on deniable database retrieval, an extension of oblivious transfer.

I've long forgotten the papers and they had some efficiency issues, but seemed like the work could be revolutionary if expanded to building routing networks that prioritize privacy and anonymity.

Smarter people than me can point out this is naïve or old or broken or limited in application or otherwise not worth pursuing. I thought it seemed fun at the time though.


This is a big deal.

If we can find ways to perform secure, multi-party computation, we could develop fully distributed computational, networking, and power delivery systems.

Your solar roof tiles could be, basically, CPUs or GPUs with embedded wireless networking.


Can you elaborate why solar roof tiles need such intelligence, especially the secure part? I don't see what data they would need to hide.


Distributed power electronics are inherently dangerous. If you have a surplus of power and your neighbor has a deficit, it is desirable for the two systems to communicate to send power from you to the neighbor. If a bad actor can leverage this system to tell everyone to send all of their power to the grid, they could easily damage the infrastructure in a very costly way.


Ok so the grid (that knows everything already) commands every house what to do, using a pinned certificate to authenticate. What part of this needs to be zero knowledge?


It's also important to point out that the grid is archaic in many places, and knowledge of its state isn't complete across the system. In the US, we have three major grids, which are also connected to Canada's (which I can't speak to). [0] From my understanding, as an amateur with an interest, we know a lot about where those major grids connect but very little about the distributed nodes that make up the network; the Northeast blackout of 2003 is a really great example of this. [1] Essentially, there was a grid failure in Ohio that spilled over into the other sections of the grid. In short, a power line contacted a tree in Ohio, which caused a cascading failure all the way to NYC.

So, let's now bring IoT into the mix. You and I have smart houses, with smart solar tiles. John attacks our tiles plus all our neighbors' and directs a major electrical spike towards our local substation. Now it's a physics question: where are all those joules of energy going? It's a heat problem, and right now there is no way to dissipate that heat from the system, which will melt our substation. Say our neighborhood sits between several other neighborhoods and the main power station: we just killed a node, and guess who else doesn't have access to power because they don't have tiles like we do.

That's the basic reason why security and authenticity are important in relation to the energy grid. It would be nice if there were an effective way to dissipate an "electrical DDoS" (I'm not sure if it goes by another name). If you're interested in energy dissipation within the energy grid, this is a great question on SE. [2]

All that said, this is also a major reason why the government is consistently freaking out about our power grid being hacked.

[0] https://en.wikipedia.org/wiki/Continental_U.S._power_transmi... [1] https://en.wikipedia.org/wiki/Northeast_blackout_of_2003 [2] https://electronics.stackexchange.com/questions/117437/what-...


I'm imagining a world where all the solar roofs have a "Tron Tile" to protect the system.


It would create a truly distributed compute utility that was accessible to anyone within range of the network.

Individual tiles could store and compute and cooperate in a mesh network.

To make something like that feasible, it needs to be secure and reliable. In the privately-owned case, I want to ensure that an adversary who gains access to the "grid" can't exploit it against me. Think of it as having a Raspberry Pi on the outside of your house where anyone could tamper with it.

In the public scenario, you would want users to be able to offload computational tasks to an ambiently available computational resource. Some of that computation will need to be local and, for disaster-planning reasons, would be most helpful if distributed in nature. But in order for Joe and Jane Schmoe to rely on such a resource, they would want to know that their data isn't being stolen or shared with others.


An innovative project called Golem (golem.network) is working on distributed computational power with data privacy and verification. They seem to be ahead of the pack and think this could be the future of computing.


Is secure multi-party computation efficient enough for such use cases, i.e. shared compute?


Isn't secure MPC also called "homomorphic encryption"?

Agreed, it is a very interesting area!


No, sMPC is seen as different from FHE (Fully Homomorphic Encryption). Some protocols utilize the partially homomorphic qualities of certain cryptosystems, e.g. Paillier for additive homomorphism.

I view both of these things as methods to perform secure computation on encrypted data. sMPC has a lot of applied research behind it, while FHE was mostly theoretical until about a decade ago. I think we'll see FHE catch up with sMPC soon, as there are technical limitations to sMPC systems that FHE doesn't have.
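To illustrate the partially homomorphic building block mentioned above: with Paillier, ciphertexts can be added (and scaled by plaintext constants) without decrypting. A small sketch using the third-party python-paillier package (`phe`), which is my choice here, not something from the thread:

    # pip install phe   (python-paillier)
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Two parties encrypt their values under the same public key.
    a = public_key.encrypt(42)
    b = public_key.encrypt(58)

    # Anyone holding only the ciphertexts can combine them...
    total = a + b        # encryption of 42 + 58
    tripled = a * 3      # encryption of 42 * 3

    # ...but only the private-key holder can decrypt the results.
    print(private_key.decrypt(total))    # 100
    print(private_key.decrypt(tripled))  # 126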


I think HE falls within the superset of MPC theory, but the two aren't synonymous.


They are equally interesting!

I'd love to be able to play with some of these ideas in my current favorite (but alas not mathematically speedy) language, Elixir. If anyone has any sort of intro guide to HE or MPC, I'd love to have a look.


Michael Jordan in a recent seminar talked about how they were developing a trade-off parameter that modelled privacy. By treating the trade-off parameter as a knob, you can control the trade-off between data quality and privacy compromise.

Similarly, Google has started using methods like federated learning to train ML models without ever sending your data to the cloud.

The idea is certainly gathering traction, and it will be very interesting to see where it goes from here.
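The "knob" framing sounds a lot like the ε parameter in differential privacy (my reading of the comment, not necessarily what the seminar covered). A minimal sketch of that trade-off, releasing a noisy mean where a smaller ε means stronger privacy and a noisier answer:

    import numpy as np

    def private_mean(values, lower, upper, epsilon):
        """Differentially private mean via the Laplace mechanism."""
        values = np.clip(values, lower, upper)        # bound each record's influence
        sensitivity = (upper - lower) / len(values)   # max change one person can cause
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return values.mean() + noise

    salaries = np.array([62_000, 71_000, 58_000, 90_000, 47_000])
    for eps in (0.1, 1.0, 10.0):   # the privacy "knob"
        print(eps, round(private_mean(salaries, 0, 200_000, eps)))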


> The classic example is developing an algorithm that allows two people to figure out who is paid more without either revealing what their salary is.

Is there actually any way to do that without leaking the salary? If Bob wants to know Alice's salary couldn't he just run the calculation multiple times with different numbers as his salary?


Often these algorithms involve information passing back and forth between the two parties, so Bob wouldn't be able to re-run the computation without Alice's help.


There is a project chasing this that is pretty interesting in my opinion. Golem (crypto token $GNT). https://github.com/golemfactory/golem. Their project became open to the public a few months ago.


Aren't Zcash and Monero based on similar principles?


Check out this interview I did for the Epicenter Podcast with Enigma, a company building an MPC implementation, for a decent intro to what MPC is.

https://www.youtube.com/watch?v=ajAUByRZGWM


This can also have a huge impact on financial fraud and anti-money-laundering. Shared fraud data can significantly thwart or hinder ongoing fraud rings and the like. Banks do not share fraud data today precisely due to the lack of such an algorithm.


That sounds really cool. You don't happen to have any links to the satellite collision system that contractor put together?


I think the most interesting computer science fields are actually applications of CS in other domains.

Science has changed a lot in the last decades, moving from a genius in a room looking at the data and coming up with a grand theory, to vast amounts of data that no single human can make sense of. The work of the computer scientist is to quickly understand problems from various fields and then solve them with tailor-made algorithms that leverage the prior knowledge and the structure of the data.

One such interesting field (which I'm working on) is computational biology. We're working on leveraging sparse experimental data for protein structure prediction. To do that, we end up using algorithms and ideas from various CS fields, from machine learning to robotics to distributed systems. Other people are working on exciting areas like computational protein design and studying drug–protein interactions in silico.


Yes! I studied law before CS, and now I learn all these algorithms which deal with questions about how to do something efficiently – and these algorithms are unknown to all these people thinking about important questions in that field.

And I think this also applies to other fields. I gave the book "Algorithms to Live By" (which is basically an overview of CS algorithms) to a medicine student, and he was immediately inspired and came up with ideas on how to apply them to his research. CS algorithms are just so fundamentally true that I think they should be more universally known.


Slightly off-topic, but I wanted to ask why and when you started studying CS after law.

I recently graduated from law school and am now an intern at a law firm. I have a strong interest in CS, and it bothered me for a long time that I went to law school instead of studying CS.

I've overcome those feelings over the years and dedicated myself to becoming a lawyer. But your post caught my interest.

I'd be glad if you could share some of the story behind you studying cs after getting your law degree.


After my law degree I worked for a year in a big international law firm (I didn't yet have my license, so I was a kind of trainee – similar to your position right now). I realised that there is a huge interest in technical solutions to make the work more efficient (often labeled "Legal Tech"). But there was very little actual understanding of technology, which I think is one of the reasons why there aren't yet many real-world applications that are really making a difference. That was when I decided to go back to university for a CS degree.

Some learnings so far: 1) I get great feedback on my decision from other lawyers, who are generally very interested in tech but not well versed in it. 2) Legal Tech feels a bit overhyped right now, but eventually it will change the field drastically. Law firms need lawyers who have technical skills. And that doesn't necessarily mean a whole CS degree; some programming skills etc. will already do.

I personally love tech so much that I don't want to go back to a law firm to practise law, but rather actually develop technology. But for you, if you want to become a lawyer, I can promise that you will find fertile ground for your interest. It will soon be one of the most sought-after skills for law firms. So if you learn some programming (maybe you already know some) and take some online courses (there are great CS resources online), then the next time your law firm gets offered an ML tool (advertised as magic) or needs to implement a new tech solution that really influences the workflow, you will be the star of the firm for being a critical but competent colleague. Or if you're starting your own law firm, I think there is great potential for a more automated workflow. In your position, I would be very glad for your CS interest – you're in the right field and it is the right time for it! :)


Thanks a lot for your detailed and motivating answer. Actually, I know quite a bit about programming, but I generally try to hide that I'm highly interested in computers, because if anyone notices, I get asked why I chose to study law in the first place. Over the years I learned not to look like a computer nerd and how to look and behave like a lawyer. I also had some problems with interpersonal communication, but through trial and error I've become socially adept and now I can get things done.

It is really nice to hear that my skills and interest in computers won't go to waste in practicing law. I hope my firm also gets offered an ML tool where I can show my skills. For now I can navigate the document management systems with ease and use some Word add-ins (Contract Companion etc.); I guess that'll change in time and I'll get access to more sophisticated tools.


If you don't mind me asking - whereabouts (geographically) are you based? I'm hoping to get into law after a couple of years working in technology but I haven't been able to find that much information online about the meeting point of law and technology and my searches haven't found me any communities for law similar to HN for technology.


Germany. If you look up Legal Tech meetups (often hosted by law firms) you will find some law students/lawyers interested in tech. But it's still a small community. And for real techies in law, that's an even smaller pool of people.


Cool, what you said about going to do another degree gave me the impression you were probably based in Europe! I'm in the UK myself. Yeah I think that's probably the way to go, thanks for the advice.


I would love to work with you or bounce around some ideas on how to change up the somewhat legacy legal tech.


I'm gonna go one step further with the off-topicness. Do you get much of an opportunity working within law to focus on technology?

I'm about to go back to University to study law but I would love to be able to combine Law with technology. Seems like an interesting area.


Possible jobs include:

- Working as a lawyer in a law firm and being the expert/contact person for any tech stuff.

- Working as a project manager for Legal Tech in a law firm (Magic Circle law firms already have these jobs)

- Being a lawyer specialised in IT/tech/IP laws, which require a domain understanding.

- Working for or founding a Legal Tech start-up.

- Owning a law firm with a specifically engineered, automated workflow.

Right now, there aren't too many jobs on the market, but there will be more. And I can promise you, for most people tech is a black box (which also bothers them), and with it being more and more integrated in our workflows, being tech savvy will become more important in virtually any job (including government etc.).


It's good to hear that there are plenty of technology focused opportunities in the legal profession. The only one I had thought about prior to your comment was specialising in IP and technology, so I will definitely look into those other avenues.


I worked for LegalZoom in the United States for about 5 years; there is a lot of opportunity for disruption, and at last count there were around 500 legal-tech startups in the US. A combo of law and tech would be awesome; that's what one of our co-founders, who was a developer, did. He went back to school and got a law degree.


In Germany computer science is called 'Informatik'. I think this is a better name for what we are doing. It is the science of information and how to efficiently store, process and transform it. The computer is just a tool for it but the methods and algorithms can also be applied in smaller scale and even analog in different areas.


I also did law at University instead of CS as I had originally planned. Completed a conversion MSc into computing last year and now work as a developer.

I'm intrigued at what you are referring to by the important questions in law, as they relate to CS algorithms?


Agreed. CS is the study of translating human needs into formal systems. In a way, it's "applied philosophy".

I think taking CS and "bouncing it off" of other disciplines is where the real magic happens.


Fully agree with this. I'm amazed that even in universities and similar organisations, nobody from Dept X wanders into Dept Y and simply asks "I'm working on this problem, anybody got any ideas?"

It happens, but far too rarely.

Universities understand silos. Supervisors get nervous when a student wants or needs to work with another department. There are reasons for this: supervision and grading become hard, and funding applications become complex. But this simply uncovers the depth of the silo effect.

Arts departments are at the vanguard here. You'll be far more likely to find a fashion PhD working with a biologist than you would to find a comp sci PhD working with a lawyer. Perhaps medicine gets it too, but even then it's largely the lab-based stuff like image processing. The clinical and public health worlds are only starting to gain exposure.

That's my experience anyway, hopefully others have counter experiences.


One field where CS might be put to interesting use is history.

Utilizing machine learning to process and analyze historical texts could shine a light on patterns that have gone unnoticed thus far.


I was looking for an MS in CS (I looked in the EU, as that's where I want to go for my MS) that would've helped me with applications of CS in the liberal arts. I am specifically interested in fields like literature, history, and archaeology. I couldn't find anything.

I would want something that doesn't just enable me to use CS tools and algorithms on works in those fields (a crude example would be: applying some ML algorithm to Shakespeare's works) but also lets me study both that field and CS.


You should look into the digital humanities. Basically, it's a big basket, at least on the US side of the pond, where all that stuff goes. The downside is that you are probably not going to get your MS in CS through it.


> We're working on leveraging sparse experimental data for protein structure prediction. To do that, we end up using algorithms and ideas from various CS fields, from machine learning to robotics to distributed systems. Other people are working on exciting areas like computational protein design and studying drug–protein interactions in silico.

Are there particular methods you use to deal with little and sparse data?


We're basically merging the sparse experimental data we get with other priors we have (the energy landscape, residue–residue contacts predicted from evolutionary data) in an Expectation-Maximization kind of algorithm, where at each step you get better predictions, in the sense that they satisfy the experimental data while agreeing with the priors of the problem (low energy, a nice fold, etc.).
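Not the parent's actual pipeline, just a cartoon (all numbers invented) of the general recipe being described: iteratively refine a model so that it fits a handful of sparse, noisy observations while also respecting a prior. Here a toy 1D "structure" is refined against 8 observed points plus a smoothness prior standing in for the energy/contact terms:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 50                                    # toy 1D "structure": 50 coordinates
    true_x = np.cumsum(rng.normal(size=n))
    obs_idx = rng.choice(n, size=8, replace=False)            # sparse observations
    obs = true_x[obs_idx] + rng.normal(scale=0.3, size=8)     # ...and noisy, too

    x = np.zeros(n)
    lam = 5.0                                 # weight of the smoothness prior
    for step in range(500):
        grad = np.zeros(n)
        grad[obs_idx] += 2 * (x[obs_idx] - obs)      # data-misfit term
        diff = np.diff(x)                            # prior: penalise big jumps
        grad[:-1] += -2 * lam * diff
        grad[1:] += 2 * lam * diff
        x -= 0.01 * grad                             # one refinement step

    print("rmse at observed points:", np.sqrt(np.mean((x[obs_idx] - obs) ** 2)))

Each pass nudges the estimate towards both the measurements and the prior, which is the same shape of loop, if nothing like the same scale, as the EM-style refinement described above.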


Nothing in your example is being "leveraged". "Use" is the word you're looking for.


leverage: use (something) to maximum advantage. "the organization needs to leverage its key resources"


Almost all of the answers on this list are not fields that are "emerging" but fields that "have already emerged".

The ideal emerging field is one that's so obscure we haven't heard of it yet, but so important that we will. If there are widely disseminated books on Amazon about your field, it's not emerging. If there are hundreds of professionals cranking out papers about your field, it's also not emerging.

Emerging fields are underrated and under-recognized. What are they?


On the philosophical side, I recently published a paper which could potentially lead to a whole new genre: making actual scientific (=falsifiable) progress on the previously-ineffable question, "Do we live in a simulation?"

"A type of simulation which some experimental evidence suggests we don't live in" https://philpapers.org/archive/ALEATO-6.pdf


The x − x̂ property is very easy to avoid when building a simulator. Most server-grade computers already use error-correcting codes for their memory. Or the simulator could just abort and restart at a recent checkpoint if an error is detected. It's possible to detect errors with an arbitrarily low false-negative rate for a small additional cost of computing and storing checksums.

Nevertheless, it's an interesting observation that we can now easily do experiments that demonstrate correct behavior of logic to the 10^-15 level. If Descartes were looking for evidence of the fallibility of a daemon creating his sense data, it would have been hard to demonstrate better than 10^-3 or 10^-4.


You're right of course. Nevertheless there's a difference between saying "the simulating computer probably uses error-correcting codes or something" (speculation) vs. saying "an experiment suggests (same thing)" (science).

To borrow from Nick Bostrom: suppose we run two types of simulations. Important simulations and un-important simulations. For the important sims, we use error-correcting codes, we save checkpoint images, etc. For the unimportant sims, we don't do those things, in order to save money. This allows us to run far more unimportant sims than important sims. Thus, if someone is incarnated randomly in one of the sims, it's probably one of the cheap ones (just because there are more cheap sims than important sims, by basic economics). The point is just to show that it is possible for a philosopher to argue against error-correcting codes etc. Indeed, if we leave it to philosophers, we'll probably never make progress.

We need to appeal to the muse of science, that harsh mistress who serves us cold hard facts, every single one of which throws 50% of philosophers out into the darkness where there is wailing and gnashing of teeth :)


Perhaps I should have asked for the most obscure :) I wonder how many truly emerging fields still exist within computer science. I feel that resiros [1] may be correct in suggesting applications in other areas of science are most interesting / obscure (in the context of that discipline, at least).

[1] https://news.ycombinator.com/item?id=17696498


I disagree. It depends on where you draw the line, and that is entirely subjective. For you, it's when no textbooks are written; but for the purposes of this discussion I think it would be more useful to talk about fields that are not so mainstream yet but that we have enough material to discuss.

It's just a catchphrase; there is no real objective boundary.


I see several responses that are of truly emerging fields. The one I decided to mention, amorphous computing, has been around for a few years but hasn't gained much traction. It's hard to drive discussions around tech that by definition not many know about or understand or have much input on...


Process Mining [1]. When I programmed a rather complex logistics simulator at work I told my coworkers 'whoever comes up with a way of instantiating a simulator from data will be praised forever'. Turned out process discovery is a thing (well, one of _the_ things in PM). And there's so much cool stuff to do and being done. I'm now on the last stretch of my doctorate researching the mining of typical plans in non-competitive environments.

[1] https://en.wikipedia.org/wiki/Process_mining
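For anyone wondering what "instantiating a model from data" looks like at its simplest: the first step of most discovery algorithms is just counting the directly-follows relation in the event log. A toy sketch with a made-up log (not any particular miner):

    from collections import Counter

    # An event log: one trace (sequence of activities) per case.
    log = [
        ["receive order", "check stock", "ship", "invoice"],
        ["receive order", "check stock", "back-order", "check stock", "ship", "invoice"],
        ["receive order", "check stock", "ship", "invoice"],
    ]

    # Directly-follows relation: how often activity a is immediately followed by b.
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1

    for (a, b), count in dfg.most_common():
        print(f"{a} -> {b}: {count}")

Miners like the alpha or heuristics miner then build an actual process model (Petri net, BPMN, ...) on top of exactly this kind of relation.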


Have you ever had a look into analyzing processes with a graph database like, e.g., ArangoDB? I wonder if that would make sense for your needs. You can traverse along the processes, find patterns, or use distributed graph processing with Pregel to analyze from different angles.


It is not only the large scale of processes that makes process mining interesting, but also the tooling that comes with the field. Look, for example, at the tool Disco: https://www.youtube.com/watch?v=pmXZQhFSv10

It provides automatic visualisation of graphs, analysis of bottlenecks, and lots of analytics, while all you need is system logs linked to an ID.


I haven't, no, but I know there are initiatives in Process Mining that closely relate to knowledge graphs, etc. and wouldn't be surprised if there are groups working on that.

I'm particularly working with the mining of plans (as in Automated Planning) in declarative process models. If I have a chance I'll look into it, thanks for the heads up


AI, machine learning, and neural networks are, of course, booming, but I consider them to be hyped.

I consider type theory and formal verification to be more promising (but more academic). Distributed systems and everything having to do with parallel and/or high-performance systems is a good midway between what the industry likes and what's interesting from an academic point of view.


Haha.

Formal verification has been around for 40/50 years and we can't say it is a wide success from an industrial point of view. It has some achievements in terms of results/methods and projects checked, but on a daily basis, pretty much no one uses it. We are ages away from having every programmer understand formal verification and having all programs verified/proved.

Type theory is in a similar situation. Many issues in code could be solved with basic typing algorithms, but people and companies favor languages with poor or no typing (Python, JavaScript).


The biggest barrier to adoption of formal verification that I have seen as someone just starting in the field (working through Software Foundations and have a number of projects planned with SPARK, Frama-C, and LiquidHaskell) is the lack of groundwork. Verifying just your own code is complex enough as it is but working with libraries without any clear specification of their interfaces and behaviours makes this so much harder.

I think there is real value in having verified libraries, or at least libraries with well-defined specs, so that interfacing with other code isn't so tedious. I think this issue is starting to be overcome with regard to the usage of strong type systems. Truly strongly typed languages are finally getting the libraries and communities built up so that they don't seem quite as daunting.


To me, personally, the biggest barrier was lack of a proper introduction with a lot of examples.

I try to break this barrier a bit with my upcoming book: Gentle Introduction to Dependent Types with Idris.

I am very interested in this area but it is impossible for newcomers to get a grasp of it without too much digging. Logical Foundations was OK but I was still missing the theoretical explanation ("why does this tactic work? it is magic!").

So with accumulated knowledge from IRC and forums, I hope to address this.


Ooh I'll check this book out once I get through the pile of stuff I have right now.

And as you noted there definitely is a lot of "magic" when it comes to the inner workings of theorem proving tactics. I'm slowly figuring all of that out but like you said it definitely takes time and digging at the moment.


The Little Typer is out soon too https://mitpress.mit.edu/books/little-typer


> Formal verification has been around for 40/50 years and we can't say it is a wide success from an industrial point of view. It has some achievements in terms of results/methods and projects checked, but on a daily basis, pretty much no one uses it.

Formal verification is an essential part of mission critical applications. Therefore, even though they might be few in number - their impact is pretty significant and "wide".


That’s interesting you say that. I’ve known two engineers who worked at JPL and they said that no teams did anything close to formal verification. It’s an incredibly difficult bar to meet


She/he is right, and I was a bit cynical in my answer. There are some real industrial projects that used formal methods. I have in mind Airbus with Astrée, the Meteor subway system (Paris' subway line 14), Windows drivers with the SLAM analyzer...

Also, JPL released its model-checker Java PathFinder and hosts every year the Nasa Formal Method conference so I'm pretty confident that at least someone at NASA is interested in formal methods =)


We're definitely moving in that direction. Someone already mentioned Rust, and TypeScript is gaining traction in web dev. Banks like Barclays and Standard Chartered already have Haskell teams, and I've noticed more and more Haskell jobs popping up over the years (in London). Scala is already relatively popular.

Formal verification is used in some niche areas (BAE, Galois). Proof Engineer is a real role some companies are looking to fill.


Rust looks like a step towards practical formal verification just because of its design philosophy. I think what we're doing is making engineering languages more and more verifiable as research languages become more and more expressive. Eventually they'll meet in the middle and we'll have formal verification in "real life."


No progress in language theory can fix the fundamental problem of software verification: you need a formal specification to have anything to verify. Who wants to write not only a detailed spec for their code, but a spec that has well defined semantics in some kind of logic? Nobody, that's who. There are very few properties that you care about that are both sufficiently easy to encode in a formal specification and not provided automatically by a safe language.


Even more, software development now involves an assumption of the average manager or customer that some extra feature can be added halfway through.

Is the customer going to be happy with "we've billed $X for a formal spec and that means that we can't make the change that you want, that seems simple without $Y dollars for changes to it and the code." Notice that software methodologies have gone the opposite direction here, with Extreme Programming basically aiming to make all of the programmer's activities revolve around exactly what and only what the customer has actually requested.


But the features are not arbitrary. The vast majority of features are common among many applications, and a template formal spec can be built to satisfy those features. Once formal spec of the building blocks is created, there will only be the small portion of unique code that needs a unique formal spec.


In the logical extreme that can't be the case, because a fully debugged program plus a machine to run it on actually satisfy the definition of a formal specification. Deciding what you want your program to do is the eternal burden of programming, but maybe there's a way to make formal specification at least as easy as regular programming.


But then you already have a perfect implementation that you somehow made without the use of formal verification. But you want to introduce formal verification because translating real-world requirements into a formal language is hard enough that you can't be sure of correctness...


One example of something you'd always like to verify is, "this code does not have undefined behavior". This could be the key to obtaining C speed without C's lack of safety. In fact, in some possibly-formalizable sense, it's probably the ONLY way to do that.



You're absolutely right, and that's exactly why I am enthusiastic about these fields. I think there is a ton of potential, and that these fields will be booming once the industry discovers this.

Of course, the point is not to prove every program correct. But it should be feasible to prove security-critical parts correct, especially for large companies.

The biggest problem is that formal verification is about as un-sexy as it gets, since it has no applications in itself.


Not so long ago the same could be said about AI.


And perpetual energy! Guess we, as humans, just aren't great at predicting the future, regardless of what the last 50 years looked like.


> but on a daily basis, pretty much no one uses it.

Maybe not for software, but certainly in the hardware world formal verification is commonplace, with mature tools available from multiple vendors.


Machine learning has been around with no success from an industrial point of view until very recently. Until fairly recently, nobody cared and few understood big O notation.

Recent programming languages have added syntax to avoid issues such as off-by-one errors (generators etc.), and TDD is slowly becoming a standard everywhere. The next step to improve quality in software is formal verification, IMO. There is quite a bit of research in that domain and even some academic programming languages integrating it within their syntax.


Funny, distributed systems supervisors often warn their students that there is a huge disconnect between theory and practice, and ask whether they really want to be in the field.


I was amazed, when I took a distributed systems class, at what exists or is known about but is almost never used. Still, an expert there is probably pretty industry friendly, and someone somewhere must have a distributed objects/CORBA system that can't be dismantled.


Various fields of Deep Learning. Right now: Reinforcement Learning.

See: https://www.forbes.com/sites/louiscolumbus/2018/01/12/10-cha... or in general any other marker like NIPS submissions or arXiv preprints on DL.

Of course the focus changes, and maybe in the next 2 years it will be on something different than RL. But still, even in Computer Vision it is a very vibrant field, since its breakthrough in late 2012 (https://www.eff.org/ai/metrics). The majority of more traditional disciplines of CS had their breakthroughs a few decades ago.


Homomorphic encryption is a mind-blower. But I fear that we may never see it in its fullest glory. It's going to be computationally too expensive or too impractical for one reason or another. One can still hope.


Quantum computation gives you homomorphic encryption for free, so there is some hope in a quantum-inspired algorithm.


Aren't ring confidential transactions (RingCT), used in some cryptocurrencies, a form of homomorphic encryption that is being applied now?


Monero uses ring confidential transactions as of now and zCash's zkSNARKs take advantage of some form of homomorphic encryption.

There are probably more but those are the ones off the top of my head.


DNA computing might be an interesting new domain [1]. The idea is to use DNA as memory, while using proteins/RNA as logic operators. This can provide massive speed and efficiency gains, especially for optimisation problems that need parallelization. Just consider that 4 bits of information on DNA take only about 1 nm^3 of volume, whereas solid-state memory, at about 3 Tb/in^2, works out to roughly 10^7 nm^3 for the same 4 bits.

To me it is still not clear how scalable the DNA computing is, but there are nice proofs of concept already [2].

[1] https://en.wikipedia.org/wiki/DNA_computing [2] https://www.nature.com/articles/s41586-018-0289-6


DNA computing is going to be huge. Nobody is talking about it but a handful of people are slowly pushing it forward.


I am willing to bet that both DNA computing and DNA manufacturing (organically 3D print things, but like how organisms grow) will be yuuuuuuge.

Not sure when it will have its internet moment, but the universe has been doing this for a long time and once we unlock its secrets, we become a wee-bit closer to Gods.


Algorithmic Game Theory [1]

Going into CS as an undergrad, I didn't anticipate the depth that the field had in other domains -- and for some time, I wanted to double major in {math, biology, economics} to supplement my education.

However, while in the algorithms course, I stumbled upon a connection between linear programming and 2-player zero-sum games (the minimax theorem [2]). Up to that point, I had never considered the idea of using a computational lens to view problems outside of CS, such as "what is the complexity of Nash equilibrium?"

It turns out Algorithmic Game Theory can be applied to study the theory of auctions (Why does eBay use second-price auctions?) [3], tournament design (Why would a team purposely lose?) [4], or something as basic as routing (Why does building more roads lead to more congestion?). A small worked example follows the references below.

[1] https://en.wikipedia.org/wiki/Algorithmic_game_theory

[2] https://en.wikipedia.org/wiki/Minimax_theorem

[3] https://en.wikipedia.org/wiki/Auction_theory

[4] https://theory.stanford.edu/~tim/f13/l/l1.pdf
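To make the LP connection concrete, here's a small sketch (the library choice is mine, the thread doesn't prescribe one) that computes an optimal mixed strategy for the row player of a zero-sum game with scipy's linear programming solver:

    import numpy as np
    from scipy.optimize import linprog

    # Row player's payoff matrix for rock-paper-scissors.
    A = np.array([[ 0, -1,  1],
                  [ 1,  0, -1],
                  [-1,  1,  0]])
    m, n = A.shape

    # Variables: mixed strategy x (m entries) plus the game value v.
    # Maximize v  <=>  minimize -v, subject to A^T x >= v and sum(x) = 1.
    c = np.zeros(m + 1)
    c[-1] = -1
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]   # probabilities >= 0, v unbounded

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x, v = res.x[:m], res.x[-1]
    print("optimal mixed strategy:", np.round(x, 3))   # ~[0.333, 0.333, 0.333]
    print("game value:", round(v, 3))                  # 0 for a symmetric game

The dual of this LP gives the column player's optimal strategy, which is exactly the minimax connection mentioned above.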


I don't know for sure, but I certainly hope we'll see some fresh thinking about user interface design and construction. The past couple of decades seem to have been substantially about recapitulating what came before in the web browser, and while webification has its good sides (easier deployment), the actual interfaces for data-entry type tasks still seem as clunky as ever.

AR is potentially an interesting sub-field, but doesn’t seem to be the answer for everything (e.g. those form-like data entry tools...)


I think UI progress is unlikely without good AI, and good AI has to be much better than human to be passable.

(If you're not convinced, try watching how often you have to ask your fellow humans what they meant by a communication and/or a request for information. It's probably more often than you expect - but you give fellow humans a pass because you're used to it, and so are they.)

Either that, or personal data has to be stored on a central server so it can be accessed on demand by web apps - which would eliminate a lot of web forms, but would have uncomfortable political and social implications.

There's still room to improve form-based pages, because there's still far too little research into best practice. But forms are an efficient way to collect information, so it's hard to imagine a secure and private UI paradigm that would eliminate them altogether.


> Either that, or personal data has to be stored on a central server so it can be accessed on demand by web apps - which would eliminate a lot of web forms, but would have uncomfortable political and social implications.

That doesn't necessarily require centralisation. Web browsers have some form-filling capabilities now, and that data can stay under end-user control. Perhaps there's scope for building on something like this (although the growth of, _e.g._ "social login" doesn't leave me too optimistic. That perhaps does count as an example of UI innovation, although one which hasn't particularly registered with me since I tend to avoid it).

> There's still room to improve form-based pages, because there's still far too little research into best practice. But forms are an efficient way to collect information, so it's hard to imagine a secure and private UI paradigm that would eliminate them altogether.

Agreed. I don't see easy wins, but trying to make forms as good as they can be seems a very worthwhile area of endeavour. I suspect part of this might be trying not to go too far in terms of baking "business rule" type stuff into forms, which has a tendency to leave people in impossible states (thinking, for instance, of academic grant systems which can end up with some very strong assumptions about career paths built in)



CRDTs look interesting and are new enough that there will be more to learn about them. They seem like an important component in distributed systems (which is almost all new systems).

https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...
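A minimal sketch of the simplest CRDT, a grow-only counter, just to show the flavour: state merges by element-wise maximum, so replicas can accept writes independently, sync in any order, and still converge.

    class GCounter:
        """Grow-only counter CRDT: one slot per replica, merge = element-wise max."""

        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                 # replica_id -> count

        def increment(self, amount=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

        def value(self):
            return sum(self.counts.values())

        def merge(self, other):
            for rid, count in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), count)

    # Two replicas take writes independently, then sync in either order.
    a, b = GCounter("a"), GCounter("b")
    a.increment(); a.increment()
    b.increment(3)
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 5       # both converge without coordination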



About the formalisation of laws in code (read: "write down laws as code"), I wonder how far this can go (e.g. in a 10- or 20-year timespan). Any idea what the most suitable parts of law (notably public law) are to apply this principle to? And how far can companies/startups go without the help of governments in this venture?

Another note on "smart contracts" in the first sense, or, as I would put it, trusted computation as a way of executing multi-party agreements. This approach is already in use in electronic markets, for example, and public blockchains seem to be a way to bring it to the masses. But I think it's still hard to say how this can interplay with "wet" decisions (involving a judge or an arbitrator). That's probably one of the interesting questions in this domain.


One interesting approach might be to work backwards from desired practical applications: https://www.gartner.com/smarterwithgartner/gartner-top-10-st...


Personally, I think higher order cognition in AI will be hot. Deep learning has monetized the introduction of AI into numerous mainstream domains (e.g. smart NLP search, vision, speech, game RL), which will motivate and underwrite efforts to push AI beyond the shallow hacks of the past, finally breaking through AI's brittleness problem.

Some problem domains are killer apps, like self-driving cars and, on smartphones, personal digital assistants and verbal interfaces. There's no stopping these initiatives. The only question is how far each can go without moon-shot levels of investment. But between the economic interests of especially Google and Apple to advance their mobile devices, and the military to make weapons and intel as smart as possible, I'm convinced there's enough critical mass for AI's pile to stay hot for a couple of decades or more.

The trick is to avoid the roadblocks that today's academic agenda inflicts on researchers by demanding they publish frequent shallow incremental novelties. What's needed is 5-10 years to develop the infrastructure to enable the fielding of robust general reasoning with causation and rich knowledge bases.


Within a year or two, we will see grad-level courses in Chaos Engineering and SRE at top CS programs, just as we've seen the addition of Distributed Systems classes in the past few years with introductions to ZooKeeper, Paxos, BigTable, Raft and Spanner. There will be an explosion in academic work on the science of "failure-injection" methods ;)
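A toy illustration of the failure-injection idea (not modelled on any particular tool): wrap a dependency call so that, with some probability, it adds latency or fails outright, then check whether the surrounding code copes.

    import functools
    import random
    import time

    def chaos(failure_rate=0.1, max_extra_latency=2.0):
        """Decorator that randomly injects latency or errors into a call."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                time.sleep(random.uniform(0, max_extra_latency))   # injected latency
                if random.random() < failure_rate:
                    raise ConnectionError(f"chaos: injected failure in {fn.__name__}")
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    @chaos(failure_rate=0.2)
    def fetch_user(user_id):
        return {"id": user_id, "name": "example"}    # stand-in for a real RPC

    # A caller that is supposed to survive the injected faults (retry + fallback).
    for attempt in range(3):
        try:
            print(fetch_user(42))
            break
        except ConnectionError as exc:
            print("retrying after:", exc)
    else:
        print("falling back to cached profile")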


Supporting evidence: "Chaff Bugs: Deterring Attackers by Making Software Buggier" https://arxiv.org/abs/1808.00659


There are currently huge opportunities in applied computing for people who can break out of the status quo. There has never been such a big gap between what technology can do and what technology culture can't. Of course it isn't easy, as it has also never been easier to waste time in technology.


I think anything "emerging" will come from the forms of unconventional computing [1]... ML/DL/AI are being rehashed on faster silicon hardware; I wouldn't call it hype, but it will be better applied to another form of hardware, once it's realized. I personally think reversible computing [2] (once understood) makes the most sense in terms of energy efficiency in CS (much needed)...

[1] https://en.wikipedia.org/wiki/Unconventional_computing

[2] https://en.wikipedia.org/wiki/Reversible_computing


Out-of-order execution and caches are once again emerging fields, unfortunately.


Emerging as in not leaking through side channels emerging?


Yup, that's what I was referring to.


Not so much science in this list, more personal and practical view:

As a web developer: WebAssembly

As a DevOp: Kubernetes

As a backend engineer: headless (CMS) API systems like Strapi or Wagtail


"Drive mobile and Javascript front ends from wagtail's API"

What is the meaning of headless anyway ?


It's essentially a back-end only content management system that makes content accessible via a RESTful API.

According to wikipedia, the term "headless" comes from the concept of chopping the "head" (the front-end, i.e. the website) off the "body" (the back-end, i.e. the content repository).


Help me create a field of human programming that's informed by computer science? Below is a simplified description of some ideas informing what I do. These days I'm more focused on language and behaviors in myself and primary relationship in preparation for our first child, so it'd be nice if someone else started working on the theoretical stuff. I'm also down for informally experimenting with things anyone comes up with from this.

Here's the basis:

Start with a category theoretical model connecting neuroanatomy and thought (MENS, category theory and the hippocampus). Combine with the concepts of universal embedding and fully abstract languages. Replace computers in the previous sentence with computational model of human; I'm playing with modified versions of differentiable neural computers and perceptual sets comprised of beliefs, emotions, intentions, and behavior/thought patterns. Choose a human language to mathematically hack into a strict subset of itself so it meets the requirements for a target language in universal embedding. I suspect some form of type theory might be needed for that; coeffects seem like they could be useful, as well as quantitative type theory. Use yourself as the primary experimental subject (ie. the test machine) to help guide things and don't worry about reproducible results...trust that you're an ordinary human with essentially the same cognitive functions as everyone else, for now. Explore how this can impact human relationships. Discover ways to organize the self in such a way as to more effectively organize at scale.

Teach the world how to program itself at an individual level.


> Start with a category theoretical model connecting neuroanatomy and thought (MENS, category theory and the hippocampus).

Yeah... um... let us know when you've got that in a form that is really true to neuroanatomy, really true to human thought, and really solid category theory. I'm pretty sure you're not going to get there in this lifetime.


Only because I don't yet see it mentioned, the one emerging field to rule them all: program synthesis. :P


Here's sort of a relevant critique to that idea: http://www.commitstrip.com/en/2016/08/25/a-very-comprehensiv...

Not saying that there isn't merit to the idea. Just saying that program synthesis is more or less synonymous with programming language design when you take into account the challenges involved.


Zero knowledge proofs.

Secure execution environments.


Graphical models [0] & probabilistic programming [1], with the latter making it easier for developers to dive into this growing AI trend. Research in the field has been steadily booming for the past decade, with companies like Microsoft leading the way. I recommend checking out some MOOCs [2] on Coursera.

[0]http://www.computervisionblog.com/2015/04/deep-learning-vs-p...

[1]http://probabilistic-programming.org/wiki/Home

[2]https://www.coursera.org/specializations/probabilistic-graph...
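To show what the programming model feels like, here's a tiny example assuming the PyMC3 package (my choice of library, API as of the 3.x series): you write the generative model, and the library handles inference.

    # pip install pymc3
    import numpy as np
    import pymc3 as pm

    data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # coin flips: 7 heads, 3 tails

    with pm.Model():
        # Prior over the coin's bias, and a likelihood for the observed flips.
        p = pm.Beta("p", alpha=1, beta=1)
        pm.Bernoulli("obs", p=p, observed=data)
        # Inference (MCMC here) is handled by the library.
        trace = pm.sample(2000, tune=1000)

    print("posterior mean of p:", trace["p"].mean())   # close to 0.7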


An informal HN survey on what some feel is the future vs. over-hyped: https://news.ycombinator.com/item?id=17129481


Functional correctness, formal verification and automated bug fixing.


Since you mentioned movements in free software, "open core" has been emerging in the last 4 years as a viable way to do business while allowing other individuals and companies to build onto your core platform while still being able to monetize your work. For example, if you launch a startup specializing in building foo, you can maintain a library called libfoo and sell a larger foo application or foo plugins or foo services using the open-source library you created.


Open source computing hardware. RISC V[0]

[0] https://riscv.org/risc-v-foundation/


Low-power sensors and associated networks.

It's hard to do and has lots of real-world applications.


I am certain there will be an emerging field in AI for engineering. Suspension that 'learns' how to keep the car flat; buildings that start shuffling warm air from a to b before it's needed ... things like that. Programming is going to change from "explain how to do it" to "show it what you want", and this has got to be a big deal.


https://en.wikipedia.org/wiki/Self-levelling_suspension

https://www.youtube.com/watch?v=eSi6J-QK1lw

You don't need AI to keep your car's body level. The technology has been around in various forms for over 60 years.

So much of what people believe we need AI for is amenable to classical engineering techniques.


Indeed, couldn't agree more. But quite possibly AI will prove simpler than classical techniques, and more adaptable to, e.g., changes in tyre pressure.


I'm not convinced: are you an engineer? To your first example, the Bose suspension doesn't use AI and is already as good as it can get. HVAC already works well and 90% of the time the air is kept at the same temperature +/- a few degrees.


I think you mean "Programming by Example".


Fully Homomorphic Encryption


Amorphous computing has always seemed interesting to me. Computing with emergent phenomena amongst scatterings of large numbers of unreliable simple processors - like the sort of thing you could mix into paint. It's very young, with lots of fundamentals to be worked out, but that's what makes it interesting!


Wide-scale adoption and promotion of open source software in the industry (e.g. Microsoft, Facebook, Google)


Adversarial Machine Learning. Fake-news detection using ML. Integrating 'good old-fashioned AI' ideas with modern ML techniques - to some extent AlphaGo went in this direction. While I am glad AI has moved far more towards the machine learning direction, I suspect the decades of AI research that preceded it may come back in a form that is combined with more modern techniques in some way. I see AlphaGo (and AlphaZero) as steps in that direction. Also, applying deep learning to search engines and making that scale efficiently. I suspect Google has partly solved this already but hasn't gone public with any of the tech, but that's pure speculation.
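For the adversarial ML point, the canonical starter example is the fast gradient sign method. A minimal PyTorch-flavoured sketch (the model and data here are throwaway placeholders, not from the thread):

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Fast Gradient Sign Method: nudge the input in the direction that
        increases the loss, producing an adversarial example."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()        # keep pixels in a valid range

    # Placeholder model and data, just to show the call shape.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)                 # one fake 28x28 "image"
    y = torch.tensor([3])                        # its (fake) label

    x_adv = fgsm_attack(model, x, y)
    print("prediction before:", model(x).argmax().item(),
          "after:", model(x_adv).argmax().item())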


I meant those as 4 separate areas. I don't think my post makes that clear.


Quantum computing?


Yes. Once we build a scalable quantum computer it will revolutionize so many fields. Simulating chemistry would suddenly become practical. We could broaden our understanding of biochemistry and materials science without designing fickle experiments, just by simulating things. This would be a real game changer and will probably lead to a bunch of breakthroughs on the way to protein-based nanotechnology.


Swarm Computing. The hardware and networking to make it practical and useful exist now, but the field is still in its infancy. There's some discussion about its use in autonomous driving, construction, warfare.


What makes swarm computing different from distributed computing?


Swarm computing is about moving, cooperating devices with sensors, possibly AI features. Distributed computing is just a minor aspect of it.


Just as an angle to answering the question (I don't have answers of my own)....

What was the most interesting CS field(s) in 2008, 98, 88, etc?


88 (taking a stab here as I was pretty young):

DTP

OO

RISC CPUs

'graphics' (as in render farms)

98:

Linux, Apache, Mozilla, OSS in general

Perceptual audio compression: MP3 (layer3.org, MP3 vs TwinVQ, and the codecs created in the period before Fraunhofer announced that the source code it had uploaded to ISO without a license, and that people had been working on for free, in fact had a license and everyone owed them 10 grand).

2008:

Cloud

mobile (location in particular). Think Foursquare vs Gowalla vs Burbn, Grindr, and other early mobile location-aware apps. App stores were popularised by Apple that same year.

AJAX, Rails

blogging.


88

OSI network stacks. They were going to replace the 'old' Internet protocols.

Relational Databases, SQL and two phase commit.

Formal methods and verification.


> Formal methods and verification.

Still emerging.


Computer Vision.


Emerging for 50 years, and still going strong


Yes, but it works now.


For computer vision without context (2D images standing alone), we have some nice solutions already, but I think that as long as we keep using the same methods, it will be insufficient for many purposes. The truth is that projecting the world onto a square 2D grid, given the complexity of lighting, puts us in a situation where we have insufficient information.

And we are already seeing this being heavily developed in autonomous driving systems and others, but I feel like the biggest computer vision applications will require much more information than a 2d image can offer. Instead, recognising objects when you have 3d information seems much more reasonable to me.


Functional programming!

Functional programming languages have several classic features that are now gradually being adopted by non-FP languages.

Lambda expressions [1] are one such feature, originating from FP languages such as Standard ML (1984) or Haskell (1990), and now implemented in C# 3.0 (2007), C++11 (2011), Java 8 (2014), and even JavaScript (ECMAScript 6, 2015).

Pattern matching [2] is another feature, now implemented in C# 7.0. My bet is that Java and others will follow in the next versions.

Here is a list of FP features, some of which are already adopted by non-FP languages (a quick illustration in Python follows after the references): Lambda expressions, Higher-order functions, Pattern matching, Currying, List comprehension, Lazy evaluation, Type classes, Monads, No side effects, Tail recursion, Generalized algebraic datatypes, Type polymorphism, Higher-kinded types, First-class citizens, Immutable variables.

[1] https://en.wikipedia.org/wiki/Lambda_calculus

[2] https://en.wikipedia.org/wiki/Pattern_matching
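Most of the items on that list have quietly landed even in Python, which nobody would call an FP language; a quick illustration of a few of them:

    import itertools
    from functools import partial, reduce

    # Lambda expressions and higher-order functions
    doubled = list(map(lambda x: 2 * x, [1, 2, 3]))            # [2, 4, 6]
    total = reduce(lambda acc, x: acc + x, doubled, 0)         # 12

    # List comprehension (borrowed from FP-style list notation)
    even_squares = [n * n for n in range(10) if n % 2 == 0]    # [0, 4, 16, 36, 64]

    # Lazy evaluation via generators: an infinite sequence, consumed on demand
    evens = (n for n in itertools.count() if n % 2 == 0)
    print(list(itertools.islice(evens, 5)))                    # [0, 2, 4, 6, 8]

    # A poor man's currying with functools.partial
    add = lambda a, b: a + b
    add_ten = partial(add, 10)
    assert add_ten(5) == 15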


The use of Lattice Boltzmann Equations to parallelize computation. It's already being used in dynamic fluid simulations, but its applications are pretty endless. I wouldn't be surprised to see LBM translated for use in Machine Learning, Cognitive AI, NLP, Computational Biology, etc.


Programatic identification and comprehension of morality in software applications is a fascinating area!


I think the core disciplines are the same, sometimes just some "updates" are happening. Like AI or CV in the last years.

The big changes are happening in engineering (software and hardware).

Many things that were known for decades are now accessible for a broader audience.


Quantum computing and cryptography


Human-computer interaction. The field is not new but the way people interact with computers has drastically changed in the last ten years and will probably continue to do so.


And within that, Distributed/shared UI.


The hype is where the money is, and if you look at the established emerging tech, AI and IoT are projected to get the most funding and create the most disruption in the years to come: https://uk.pcmag.com/feature/94662/blockchain-and-robots-buz...

The cutting-edge emerging tech I feel will be in the way we engage with data and tech and Augmented Intelligence (assistive) will see huge advancements.


Internet of Things. It is still in development and there is lots of stuff to work on.


Currently there is a lot of noise in the field, but this is practically how the data-driven approach gets applied to the real world, with all the sensor data collected. It is ripe for a lot of innovation.


Just another hype. Sensors and the internet have been here for decades. The internet is OK, but "things" are way too expensive and not reliable yet.


Maybe, maybe not. Smartphones & touchscreens were "here" for decades by the time iPhone launched. Or, take a look at the timeline of "social media": https://en.wikipedia.org/wiki/Timeline_of_social_media


IMO the best IoT is the DIY IoT, the kind that isn't really IoT but rather "I put Wifi on a raspi and connected it to a PCB".

Thankfully the online resources around electronics are plentiful and PCBs can be had for under 10$ incl. S&H.

That way I can make all my lighting IoT without having to deal with the garbage of the IoT industry.


Yes, IoT is great when you can DIY. But when you need to send a technician 2-4 times a year for each node... it’s a problem, not a business.


It's a business for repair technicians :)


Directed acyclic graphs for blockchain use. Homomorphic cryptography.


Targeted Advertising


Quantum Computing


Computer Networks


This Internet thing is going to be huge-- but it's going to tear us apart. Mark my words.


Blockchain, though no one on hn really understands it. :D


It's an unfortunate characteristic of many in the cryptocurrency community, that they think anyone who doesn't support crypto simply doesn't understand it. And by extension, as soon as they do understand it they will become supporters.

No. There are those who do understand blockchain and still don't support it. A great example is professor Jorge Stolfi. He is one of the more prominent detractors, and yet he routinely displays a very thorough understanding of the technology.


The thing is, 99% of readers here think Bitcoin is shit because of the go-to arguments that Bitcoin is too slow and only used by criminals, and by extension they don't like the blockchain either, even though the weaknesses of Bitcoin have already been solved by many blockchains.

But they don't know that, so they keep bashing the blockchain without having any knowledge of other good platforms.


I tried looking up the arguments of Jorge Stolfi; however, my Spanish is insufficient.

I don't claim to contradict that he is against the concept of cryptocurrencies or blockchain in general, but I fail to find evidence that he is against the technology in general.

I do find evidence he is opposed to Bitcoin in specific, or at least warns against it.

Could you point me to English writings where he argues against blockchain/cryptocurrency in general?


Perhaps the reason why your Spanish was insufficient for his writings is that his writings are in Portuguese ;)

Here's his primary English writing on Bitcoin and cryptocurrency in general (not necessarily blockchain), sent to the SEC: https://www.sec.gov/comments/sr-batsbzx-2016-30/batsbzx20163...


Thanks for pointing out it's Portuguese!

This critique of Bitcoin is quite short, and seems directed at Bitcoin in particular, in no way do I conclude that he is against the concepts of blockchain (say non-currency), or perhaps even cryptocurrencies that do not take on some of the Ponzi aspects.

After reading this I can perfectly imagine (but do not claim so) that he might support certain other forms of blockchains and/or cryptocurrency...


This is his reddit account:

https://www.reddit.com/user/jstolfi

He primarily posts in buttcoin, a sub that exists to mock bitcoiners. It's pretty fair to say he thinks all coins are crap, not just bitcoin.

Also, please include me in the "Understands cryptocurrencies and yet doesn't support it" bucket please :)


It's an unfortunate characteristic of many that they conflate cryptocurrency and blockchain as one and the same inseparable idea. Cryptocurrency is only one application of one type of blockchain.


Why do you think so? The math behind blockchain is pretty easy and well explained.


Math is a very small part of what is so good about the blockchain. People don't understand the significance of decentralization.



