Honestly, we need a few AI coders to replace most of the developers in the world and then this won't be an issue. Bounds-checking arrays and using calloc instead of malloc isn't rocket science. It's a simple formula.
The problem isn't the language; it's the developers.
A change in language does not completely solve the problem. Heartbleed was caused by buffer re-use without zeroing between uses. A high-performance network application could very easily do the same in another language.
That's a different "problem", and a flaw in the application code's design rather than an example of one of the language's building blocks allowing the entire execution path to be subverted.
OK, so in C you can smash the stack and that's bad. True. But I think you fail to see the larger point I'm attempting to evoke: that array bounds are artificial in the first place, and this doesn't just surface in C.
The "heartbleed in rust" example is a great one, and it arises in real life in many high level language APIs for file I/O and sockets. You have an allocation, and you have a count of available bytes coming back from a read() function which may be lower than the allocation size. So you are creating a "virtual" array bound from nothingness. Fail to respect it (without bounds checks) and you will see bugs.
If you reject that this is a valid way to write code, maybe in your API every read()-style function returns a buffer of exactly the right size, enforced by your JVM or whatever, but then you will do too many allocations and over-tax the GC.
If you accept that this makes sense, then you must embrace a more C-style way of thinking, where array bounds are created and destroyed at will and must be enforced through your own actions... And suddenly you see the other side of this coin, which reflects something valid and true about the universe: you may want to chop up a buffer into multiple pieces, and that's OK.
(Now, I wouldn't be surprised if Rust has mechanisms to chop up arrays in the way I describe and enforce the bounds you provide it... which would be handy. But frankly that does not completely destroy the validity of the C approach or substitute for a proper understanding of it. Without that understanding, you will code more heartbleeds.)
His point is that you're not forced to do that. And anyhow, that doesn't solve the issue since you can bungle the creation of the slice with the wrong offset or length.
Not bungle in the sense of overflowing the underlying buffer, but overflowing the logical buffer that is contained within it, i.e. getting the wrong slice.
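A hedged sketch of that failure mode, with made-up message framing (the one-byte length prefix and the function name are mine, purely for illustration): the copy below never leaves the physical buffer, so no bounds check fires, yet it can still run past the logical message and pick up stale bytes from an earlier use of the same buffer.

```c
#include <stddef.h>
#include <string.h>

/* Assumes buf_len >= 1 and that buf holds: [1-byte claimed length][payload...] */
void echo_payload(const unsigned char *buf, size_t buf_len, /* physical bound */
                  size_t payload_len,  /* logical bound: bytes valid this time */
                  unsigned char *out)
{
    size_t claimed = buf[0];       /* length field taken from the message itself */

    if (claimed > buf_len - 1)     /* the physical bound is respected...          */
        claimed = buf_len - 1;

    /* BUG: claimed is never compared against payload_len, so when the buffer
       is reused the copy can include leftovers from a previous message --
       a data leak with no out-of-bounds access at all. */
    memcpy(out, buf + 1, claimed);
    (void)payload_len;             /* the bound we should have checked against */
}
```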
My point is, a lot of people are spending time on this when it doesn't matter. In the limit where AI starts replacing human developers, the importance of these subtle differences between languages approaches zero.
New languages appear here and there every day, replacing this and replacing that, when in the end everyone is simply reinventing the wheel over and over.
This sounds a lot like "why clean up my room when the heat death of the universe is coming anyway?", but if you do think that AI is going to supplant all of programming then be the change you want to see in the world! Get building and we'll see which one happens first.
I honestly don't know who I'd put my money on between "AI takes over the world" and "programmers stop writing buffer overflows"
My guess is that if they make a general-purpose programming AI, then all other jobs will also be nonexistent, besides being famous and doing YouTube reviews of movies. My thought is that the problems in the way of AI programming are difficult enough, and generalise to enough other jobs, that programming is going to be the last job automated.
People are spending time on it because such a capable AI doesn't exist and language design problems do. If you changed either of those things, you'd be making a strong argument.
I believe the modulo operator is implementation-defined. As should be signed integer overflow, though it isn't.
One big problem with "undefined" is that when some platform goes bananas over something, the behaviour is standardised as undefined for all platforms, allowing compilers to go bananas too. Which they do, because every bit of undefined behaviour helps them perform some micro-optimization they couldn't do before.
I long for the day when the standard forces compilers to provide options such as -fwrapv or -fno-strict-aliasing on platforms that can reasonably support them.
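For a concrete (if contrived) sketch of the kind of micro-optimization being described: because signed overflow is undefined, gcc and clang are allowed to assume it never happens and fold the check below to a constant; compiling with -fwrapv makes overflow wrap and keeps the check.

```c
#include <limits.h>
#include <stdio.h>

int will_overflow(int x) {
    /* Looks like a safety check, but since signed overflow is undefined,
       a conforming optimizer may assume x + 1 > x always holds and fold
       this whole expression to 0.  With -fwrapv the overflow wraps and
       the check behaves as written. */
    return x + 1 < x;
}

int main(void) {
    printf("%d\n", will_overflow(INT_MAX)); /* may print 0 or 1 depending on flags */
    return 0;
}
```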
Solar is better because it is decentralized whereas other sources we have today are controlled by single entities. This puts the power (hehe) back in the hands of the people.
Very awesome. However, the $ signs indicate financial stuff. It would be nice to have a different delimiter, the way code blocks use back-ticks. For example, back-ticks coupled with single-quotes? It doesn't really work since it clashes with code blocks, but just as an example of something more legible:
`' x^2 + 4x + 4 '`
Remember: the whole idea behind markdown is that it should be first and foremost legible as plain text. One idea behind typography is that it shouldn't be noticed by the reader at all.
"$" ("$$" to be exact) is actually used in TeX to indicate opening and closing of statements/mathematical expressions. I think the author of this project is trying to conform to TeX syntax as much as possible so that TeX users like me would feel at home :).
Using $ and $$ is most legible for me, having spent many years using TeX. Pandoc seems to do the same thing, I suspect for the same reasons. I usually patch this into markdown systems that I use. This is especially useful when you have to talk about things like $x$ and $y$ and how $x^2+y^2=z^2$. In actual math documents, every third or fourth word will be surrounded by $; it is almost like italics to mathematicians, and it would be nice to be able to express it just as easily as italics.
I find it a lot more readable than the \( and \[ that LaTeX and other markdown systems use.
I believe gitbook uses just "$$", but this loses the distinction between inline and display math. (And there is a subtle difference between display math within a paragraph and display math between paragraphs, so you can't tell from context.)
I understand that some people will want to talk about money and having to escape $ could be incredibly annoying, but for actually writing math documents, this is kinda essential. So, to me, it feels like something that needs to be an optional add-on.
To the contrary, as someone who hasn't used TeX at all I find it somewhat hard to read the plain-text. The $$$ have more visual weight than the actual formulas.
Kind of interesting to see some alternatives and note the visual weight. The back-ticks are by far the best in my opinion. Too bad they are taken by the code blocks! Semicolons are nice though--kind of like a LISP comment.
MathJax by default doesn't use $ as a delimiter for inline equations, but instead uses \( and \) to avoid confusion with monetary quantities. Perhaps a similar thing could be done.
Off the top of my head, I feel that using that syntax would increase the chances that you would run into issues trying to differentiate between escaped characters and expressions, though it's an interesting recommendation (and if MathJax has been able to make it work, I'm sure it's feasible).
The inline math syntax is `x^2 + 4x + 4`$, so the dollar sign is outside the backticks, but the formula is encapsulated in the code span.
For a display block, they use the code fences with either latex or $ as a math toggle:
```latex
\psi = \frac{5 \phi}{\omega}
```
or
```$
\psi = \frac{5 \phi}{\omega}
```
At some point soon, I intend to implement this for my own use. It should be pretty easy using a markdown parser assuming one can do a lookahead/gobbler to the next character after the inline backticks.
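A rough sketch of that lookahead in C, independent of any particular Markdown parser (the function name and the line-rescanning approach are mine, just to show the idea): find an inline code span, then peek one character past the closing backtick; a $ there marks the span as math rather than code.

```c
#include <stdio.h>
#include <string.h>

/* Scan `line` and print each `...`$ span as math.  A real implementation
   would hook into the parser's inline-code handling instead of rescanning
   the raw text like this. */
void scan_inline_math(const char *line) {
    const char *p = line;
    while ((p = strchr(p, '`')) != NULL) {
        const char *close = strchr(p + 1, '`');
        if (close == NULL)
            break;                      /* unterminated code span */
        if (close[1] == '$') {          /* the lookahead: `...`$  */
            printf("math: %.*s\n", (int)(close - p - 1), p + 1);
            p = close + 2;              /* skip past the trailing $ */
        } else {
            p = close + 1;              /* ordinary code span */
        }
    }
}
```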
For exactly that reason, we used $$(inline expression) and $$$(block expression) for the KaTeX delimiters in a web-based Markdown editor that I worked on.
Originally we used the "standard" single $ delimiter but would run into problems when someone's content included a string like "...between $190 and $200...". It wasn't elegant, but it saved a lot of complicated parsing to look for whitespace or other implicit indications of what the author was trying to do.
Alas, having no garbage collector at all is a blissful state of simplicity when performance matters. It's not very difficult to clean up one's objects manually in a well-thought-out codebase.
What about smart pointers? They give the flexibility and benefits of garbage collection without the performance degradation. They're not a silver bullet, but they help.
What I'm getting at is, Rust is the only modern language (in vogue at the moment) that does not use garbage collection. It would be nice if more languages didn't require a garbage collector but gave the option to use one.
If there are some more of such languages please chime in and provide a link to them! :)
A smart pointer is just a way to add custom behavior when it's created and destroyed. The question is what the smart pointer does. Usually the answer is to update a reference count, which happens so frequently that it ends up being a lot more expensive (especially in bus bandwidth) than occasionally tracing live objects between long periods of useful work.
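To make that concrete, here is a hand-rolled sketch of the reference-counting part (the names are invented; real smart pointers wrap this bookkeeping in constructors and destructors): every copy bumps the count and every drop decrements it, and in a threaded program those updates have to be atomic, which is where the bus traffic comes from.

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct {
    atomic_size_t refs;
    /* ...payload... */
} object_t;

object_t *object_new(void) {
    object_t *o = calloc(1, sizeof *o);
    if (o) atomic_init(&o->refs, 1);
    return o;
}

object_t *object_retain(object_t *o) {
    atomic_fetch_add(&o->refs, 1);           /* paid on every copy */
    return o;
}

void object_release(object_t *o) {
    if (atomic_fetch_sub(&o->refs, 1) == 1)  /* paid on every drop */
        free(o);                             /* count hit zero: destroy */
}
```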
D has optional GC. However in hindsight it is perhaps not so smart. The problem is that it fractures the ecosystem: some libraries require GC and others don't. If you don't want to use it you have to rely only on non-GC libraries.
GC is certainly easier. However, it is important to understand the life-cycle of objects regardless. Being able to control them (if desired) is good, because in many cases one would prefer to have that control instead of being forced to use a garbage collector.
In an ideal world, a garbage collector makes sense because we don't want to have to worry about life-cycle management when we are solving other problems. Memory management can indeed be an annoying implementation detail, but as it stands there are limiting factors in the hardware that generic object management runs up against in specific application domains.
Or a meticulous team of very high-quality engineers with top-notch documentation and design practices. Note that I do agree with you, however, because as you said:
I think it's more about the underlying machine as opposed to a particular language implemented on top of Von Neumann architecture. For instance, we are constrained by the underlying machine regardless of which higher-level language we use, functional or otherwise.
I started reading this book on functional languages and lambda calculus. It has some interesting notes on how to build language syntax on top of a purely-functional machine architecture (at least conceptually). Pretty interesting stuff:
Julian Bigelow, chief engineer of the Institute for Advanced Study computer project (Von Neumann's first actual computer), stayed at the IAS after Von Neumann left for the AEC (he died in 1957) and believed the architecture was the limiting factor.
"The modern high speed computer, impressive as its performance is from the point of view of absolute accomplishment, is from the point of view of getting the available logical equipment adequately engaged in the computation, very inefficient indeed." The individual components, despite being capable of operating continuously at high speed, "are interconnected in such a way that on the average almost all of them are waiting for one (or a very few of their number) to act. The average duty cycle of each cell is scandalously low."
Julian Bigelow, quoted in George Dyson, "Turing's Cathedral", pg. 276
That's because they're universal, programmable computers. If you have a very specific, fixed task (e.g., matrix-vector multiplication), then you can build a chip in which a much higher number of transistors does useful work in each cycle. You won't be able to browse the web, watch a video, do your taxes, or program other computers with it, though.
I am naive when it comes to computer science, but doesn't the Turing completeness of something essentially mean you are not constrained? Perhaps constrained in efficiency, but not in outcome.
As soon as you target an actual user doing stuff on an actual computer, there will always be constraints due to efficiency and performance. You might not run into them, because your program is sufficiently small and the hardware sufficiently fast, but they're always there.
In this age we need to think about things like voice control, 3D manipulation of data structures, and a dynamic view of the code.
We can certainly keep designing 2D editors (and we will most likely always use them to some extent), but I believe it is more important to consider different UI paradigms altogether.
For instance, what about editing a living code environment? Game development is very immersive: you can manipulate a running environment and see results immediately. How can this be extended to other development tasks like server-side development?
What if, when you select a for loop from a code fragment, a 3D visualization of the program's data structures at that point is shown to the user? What if, instead of launching a debugger, you can run the debugger as you're writing the code, step forward and back, and see these visualizations change?
We have all the technology. It's time to get to the next level.
Yes, but. Keep ripeness in mind. It's very easy to spend effort in this area "too soon". So shape projects with care.
I run a Vive on Linux. Which required my own stack. Which took excessive time, and is limiting, but has some road-less-traveled benefit of altered constraints. So I've gone to talks, and done dev, wearing a Vive with video passthrough AR, with emacs and RDP, driven by an old laptop's integrated graphics. Yay. But think 1980s green CRT (on PenTile, green is higher res). In retrospect, it was a sunk-cost project-management garden path.
There's been a lot of that. One theme of the last few years has been people putting a lot of effort and creativity into solutions to VR tech constraints, only to have the constraints disappear on a time scale that has them wishing they had just waited and used the time to work on something else. It's fine for patent trolls (do something useless to plunder future value), and somewhat OK for academics (create an interesting toy), but otherwise a cause for caution.
So on the one hand, I suggest that if the center tenth of VR displays had twice their current resolution, everything else being current tech, we would already be starting on an "I can have so many screens! And they're 3D! And..." disruption in software development tooling. But that's still a year or two out. Pessimistically, perhaps even more, if VR market growth is slow.
In the meantime, what? Anything where the UI isn't the only bottleneck (work on the others). Or which can work in 2D, or in 3D-on-a-2D-screen (prototype on multiple screens). Or is VR, but is resolution-insensitive, and doesn't require a lot of "if I had waited 6 months, I wouldn't have had to roll my own" infrastructure. Or which sets you up interestingly for the transition (picture kite.com, not as a 2D sidebar, but generating part of your 3D environment). Or which can be interestingly demo-spiked (for fun, mostly?).
For example, I would love to be working on VR visual programming on top of a category theoretic type system. But there seems no part of that which isn't best left at least another year to ripen. Though maybe 2D interactive string diagrams might be a fun spike.
Very well put. I have to agree that these technologies are still blooming and there are uncertainties, especially in my mind around the cost of buying devices. HoloLens, for example, with a $3,000 USD developer edition, isn't exactly accessible to the general community! The next 10 years, however, will probably be pretty exciting in the AR/VR space.
What about voice-controlled programming, though? I always thought it would be nice to voice-control my OS. Not specifically for a text editor, but as a general interface to the OS. It would be nice to move these features out of the cloud and directly onto systems. But then again, a lot of companies (Amazon, Microsoft, Apple) probably don't want to encourage reverse-engineering of their intellectual property. We definitely need open-source variants.
Better AI chips with lower power-consumption and optimization for these types of operations will hopefully usher in a new set of productivity-enhancing applications!
You might enjoy http://www.iquilezles.org/live/ where he live-codes some ray-marching using some kind of OpenGL editor. It's not quite what you're talking about, since it's just running the OpenGL code, but you could imagine it going through some kind of compiler/visualizer pipeline like you're thinking about.
> We have all the technology.
> It's time to get to the next level.
Just because you can does not mean you should. Dictating code may be useful if you cannot use your hands, but that's about it. In all other cases there is no benefit to doing that.
Well, consider if you use both dictation and typing simultaneously. Vi and Kakoune are ergonomic because they require minimal changes to hand position. But if you added voice dictation like "toggle tab 2" or "toggle terminal" or "go next brace", etc., all while STILL typing, my guess is efficiency would go up and physical fatigue would go down.
Folks have created Dragon-based voiced-sound vocabularies for editor control. Voice strain is an issue, but you can load-share with typing.
You can do Google voice recognition in browser/WebVR, but it's better at sentences than brief commands.
Another component, largely unexplored, is hand-controller motion. The Vive's controllers are highly sensitive in position and angle. Millimeter-ish. So imagine Swype-style text (phone-keyboard continuous sliding) with 6 DOF. Plus the touchpad (the buttons aren't something you'd want to use all the time). Keyboard+pad is mature tech. But wands-with-pads look potentially competitive, and (not yet available) hand-mounted finger tracking added to keyboard+pad might make for a smooth transition. Plus voice. The space of steep-learning-curve input UIs for professionals looks intriguing.
Tcl (upon which Expect is built) is under-appreciated too. It's a neat language: strings behave as a sort of polymorphic data structure (they can be dicts, lists, etc.), and under the hood they get optimized based on usage.
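A rough sketch (in C, with invented field and type names) of the dual-representation idea behind that: every value always keeps a string form, and an internal form such as a list or dict is built lazily the first time the value is used that way, then cached. This is roughly the shape of Tcl's Tcl_Obj.

```c
/* Sketch only: names are invented for illustration. */
typedef struct value value_t;

typedef struct value_type {
    /* Build internal_rep from string_rep, e.g. "1 2 3" -> a 3-element list. */
    int  (*shimmer_from_string)(value_t *v);
    void (*free_internal_rep)(value_t *v);
} value_type_t;

struct value {
    char               *string_rep;   /* always recoverable: "1 2 3"        */
    void               *internal_rep; /* NULL until first used as a list,   */
                                      /* dict, integer, ... then cached     */
    const value_type_t *type;         /* how to convert and free the rep    */
};
```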
To me, computer skills mean two things: the rate at which one can learn new technologies, and one's understanding of the basic components that make up the machine.
Programming fits both of those categories: consider the number of programming languages in vogue at the moment. These languages, however, are all composed of the same components that make up the machine.
Programming languages are software, no different from iMessage or Windows Live Mail. They're just another way to control the machine.
The problem isn't the language; it's the developers.