Hacker News | new | past | comments | ask | show | jobs | submit | oscargrouch's comments

I guess it's all coming down to mnemonics: aiding our memories and communication.

Sure, this is the "state of the art", and pure language notations might be even worse, but I can't help thinking that people who think like the parent might find something even better.

Maybe something inspired by braille notation, or something invented while trying to understand how our brain works (just speculating here), will be even more expressive.

I actually like seeing an adult bothered by the fact that the same symbols that make science more expressive are also the reason there's a steep ladder for newcomers to climb before understanding what's being expressed, given it's all very arbitrary (someone in the 16th century chose a random Greek letter to represent X).

Imagine how much science would improve with more "brain power" able to try to solve some problems because there's less arbitrariness.


The problem is: who's gonna use it?

Sure, it's good to have more options, but the deployed base of browsers like Chrome is massive, and IE6 only lost its position by being careless and ceasing to evolve (and let's not forget it took years), which is not happening with Chrome.

Just to be clear, this is not directed specifically at you or your project, which is a great thing to do independently of the outcome. I'm speaking from a general perspective: we are pretty much sitting on the state of the art of browser engines, and it's hard to see something else taking over except in a "game reset" scenario (outside of the web).


The point is that a lot of sites out there don't need anything more than the web tech of 2 decades ago. With more people being aware of that and using browsers like this, the Google monopoly may weaken. I make it a point to complain whenever some site I need to use changes in the direction of Google's desires and becomes less browser-agnostic. It has had mixed results, but I suspect if a company is losing customers because its web devs are Google-loving trendchasers, they'll take notice.


> The point is that a lot of sites out there don't need anything more than the web tech of 2 decades ago. With more people being aware of that and using browsers like this, the Google monopoly may weaken.

I always thought there was a way out of this that agrees with your vision. First you'd need a powerful force to take back control of defining web standards.

This new web standard would be a very simplistic one, making it possible for two or three (or even one) people to create new browsers on top of it.

But this movement would need to be so strong that it could make a dent and start to lead a new way, and this is one of the hardest parts. People in this movement would need to be prepared to fight for at least 10 years without losing faith, until the movement's killer app could reach a market share bigger than Safari's and take over Firefox's position.

This movement could use hype akin to the Rust folks' to navigate the harder first years. It's a possibility, but a hard one to realize. Another thing to note is that Firefox's position is getting weaker, and the most probable candidate to take it over is the Chromium-based Brave.

So unless a "black swan" event happens, it's hard to see a big change here in the coming years. BTW, it's even more likely that the browser standards get stronger by completely taking over mobile applications.


Gemini is one such simplistic standard.


I think Gemini will have people following it and developing for it, but it could have been a little more ambitious, and could have taken tech history a bit more seriously, so as not to strip out too much of what is actually an improvement on the web. It could be a soft fork of the current web with a lot fewer things, though not as simplistic as Gemini is.

So I don't think Gemini is "it" yet, but I at least liked the audacity and the way of thinking; it's in the right direction. Actually, it should be people like Tim Berners-Lee leading this, but unfortunately they are lost.


I think browsers are comparable to operating systems, and if we look at the richness of the Linux ecosystem, I'd say it can't hurt to have a few more browsers and browser engines. For example, most users wouldn't see the utility of Alpine Linux, and yet Alpine Linux is the backbone of countless Docker images. Perhaps a new browser engine can fill a similar niche, targeting low resource usage and light browsing.


There already are Ultralig.ht, NetSurf, and Goanna -- all lightweight web engines.


Having a lightweight embeddable browser engine (or even an html parser) to use in your projects to display hypertext would be nice.


There are already: Ultralight, Sciter, Tauri, NeutralinoJS, DeskGap.


I have a reply elsewhere in this thread describing something I'm working on, so in case you are curious you can find more info in the details I gave there. The way to solve this problem is through indirection.

You don't expose your data layer directly to the consumer; you expose an API that resolves to one, two, or several databases from one or many peers. The indirection allows you to define your rules and use your data layer in whatever way best fits your application's goals.

So what's immutable is your application, API, and initial data, which you can mutate at later stages through other torrents or by consuming other APIs from other peers and mutating your initial database state.

The problem is, the current browser is not meant for this, and Javascript is not meant for bigger, more complex applications (of course you can do it, but...)


I've built something akin to this, but with the idea of applications distributing APIs, where the databases are also torrented but sit behind the APIs, so that developers can build basically anything.

In my case I've implemented a new "browser" based on Chrome that allows this to work without having to resort to browser-only infrastructure (for instance, applications can skip Javascript and also call RPC APIs from other applications directly).

The applications and their data are distributed over torrent and managed to work together in the same environment, as a flock where one app can consume its own APIs and also the APIs of others.

    service Search {
      // access to the sqlite db from torrents is encapsulated here
      rpc doQuery(string query) => (array<string> result)
    }
The beauty of this design is that it can also be rescheduled, with the same request routed to other peers.

---


With COVID and the rise of remote work, there are a whole lot of companies where the developers are the asset, the commodity; those companies are just middlemen charging for the work the developers do and taking the biggest cut in the process.

Those are mostly the "sweatshops" that hire developers who will work for less for various reasons. In that particular case, this solution might be very welcome for assessing new hires.

But for higher-profile jobs and companies, a company that follows this path will probably end up with low-quality hires, unless of course it just builds CRUD apps anyway.

Because this is a self-fulfilling prophecy in the end: once candidates know what they will face, they will train until they get good at that game, which most of the time doesn't mirror the qualities required for the job.


Is RT Linux a good (or at least good enough) option for realtime OSes compared to others that were designed as RT kernels from the start?


You really have to define 'good enough'. E.g., can it handle a 1000 Hz interrupt rate, and is the time from the firing of the interrupt timer to the time your first instruction executes xxx microseconds? (This is important if you are doing, e.g., velocity estimates from position information -- you need an accurate 'dt'; therefore this time should be minimized. There are other ways to deal with jitter, but minimizing this is best.)

I turned BSD 4.3 Unix into an RT system on a VAX 780 by using an external KW-11P clock on the highest-priority interrupt and having the RT code run in the kernel; otherwise, timesharing was still going on while we were running the robot -- although it was pretty sloooow. There was a big memory pool in kernel space that stored robot variables, so when the robot fell over, we would press a button to 'freeze' the circular buffer, and then the user-space code would do an ioctl() to receive a copy of that memory pool (it was too hard to share the same pool between kernel and user space -- or maybe I was too lazy, and the solution found was 'good enough' ;-) )

You can see this in action here: https://youtu.be/mG_ZKXo6Rlg?t=34 yes, I'm sitting 'driving' the bot.


Red Hat has a realtime kernel.

It is available from the mirrors, and currently also from CentOS.

Last time I checked, it would not load the NVIDIA driver. YMMV.

If you are in the < 120 Hz range, it will probably work fine for most non-safety-critical uses.


>Is RT Linux a good (or at least a good enough) option for realtime OS's

Soft realtime, yes. It keeps scheduling latency under control, enabling e.g. pro audio with low latency (very small buffers). This is unlike without PREEMPT_RT, where XRUNs happen frequently with buffers under 20 ms.

Hard realtime, no. It can't guarantee anything, as the kernel is complicated (millions of lines of code) and thus unpredictable.

Look at seL4 instead if your deadlines are hard. It was designed for realtime and comes with formal proofs of worst-case execution time.


There are options, but none with anywhere near the mindshare that Linux has.


>None with even close the mindshare that linux has.

When you have a hammer (Linux) in your hand, every problem looks like a nail.

seL4 can do hard realtime, with formal proofs of worst case.

Linux can't, and won't ever be able to, due to its complexity (millions of LoC).


For anyone interested, there was a port of WebKit/Blink where the web objects retained by Javascript are garbage collected outside of V8, using the Blink GC to destroy those objects.

The way it works is through smart pointers, where, for instance, you declare how a reference to an object is retained according to the object that references it.

The good side of this is that programming languages besides Javascript can deal with Blink objects the same way Javascript does (this feature was called 'Oilpan').

It's this feature that makes it possible for my project to have web-based applications in Swift, for instance, and it would allow plugging in the Lua VM or JIT the same way and still be a first-class citizen of the WebKit API, as Javascript is.


Is this still possible in Blink? Or was this feature abandoned?


It's there. For instance, if you want to retain a reference to a 'WebFrame', you can create a wrapper object whose lifetime you directly control, and have a 'Persistent<WebFrame>' as a property of your wrapper to hold the Blink reference.

As long as the smart pointer is not destroyed (with the destruction of your wrapper or holder), the WebFrame object is guaranteed to retain a reference, keeping the object alive.

Otherwise you can hold Weak<> and Member<> smart pointers for things your object doesn't own. In the case of Weak<>, you can expect the target to be collected in the next scheduled GC job (or at any time). Member<> is not a strong reference like Persistent<>, so you don't own the object's lifetime, but while you hold it the object should not go away.

To see how serious the commitment to this scheme is, just explore the Blink codebase: the scheme is actually used internally as a way to control lifetimes between objects, and not just as an API for consumer projects (unlike V8, which vends a different API to consumers of the VM than the one it uses internally).


> I feel like this would enable a more sensible choice:

I have a working solution for something like this that is based on Chrome

https://github.com/mumba-org/mumba

Application developers publish native Swift applications, where Swift has full access to the WebKit/Blink API the same way Javascript does, and even more, with the patterns from the Chromium renderer (for instance, access to lifetime events: OnDocumentLoaded, OnVisible, OnFirstPaint, etc.).

The offline-first part comes from the fact that every application runs first as a daemon which provides a gRPC API to the world for the services it offers, and the web-UI application process may also consume this same API, from its daemon process manager or from other applications.

Note that the daemon process, which is always running, gives access to "push messages", so even if no UI app is running things can still work, like syncing into the DB, or even launching a new UI application in response to some event. This service process is also pinged back by the web-UI process on every event, so the daemon can act when there's a need ("when everything is loaded in the UI, do this...").

Also, about the article: note that with this solution you code in pure Swift (not WASM), just as the article points out for web applications built without any Javascript.

Other languages like Rust, C++, and Python can be added, given that the Swift applications talk to the runtime through a C API. (And I could use some help from other developers who also want this in other languages.)

If you ship your applications with this, you get a much better environment than Electron, using far fewer resources, as things are shared among all applications. The cool part is that applications can consume one another's services (for instance a Mailer application), forming a network and depending on one another (the magic here is that they are all centrally managed by a core process that is always running).


I can relate to a lot of what he said, but one thing that helped me get through it was dedicating myself to a bigger goal.

I started a side project that took me some years (at least 4) and a lot of perseverance, because there are a bunch of things that are actually quite boring to implement. (You need to grow a thick skin to get through those boring tasks, which can sometimes take weeks.)

I lost basically all of my friends in the process (I'm actually a social person), with the exception of a few who kind of understood it.

No weekends, working 12 to 14 hours a day, having to mix with and deal with big, established codebases... it's the loneliest job in the world, and you need a lot of mental balance and good humor to get through it.

But a special spice that actually kept me going was that there's a social goal in the project: the idea is to provide a way out of the FAANG-centralized and controlled world.

People are mostly unaware of where we are heading as long as they have cool gadgets to play with, and governments might only act when it's too late. So while everybody is enchanted with this brave new world, the direction things are heading in is actually a pretty dangerous one (and I really hope to be wrong about this).

This was a very special reason that kept me going even through the hardest parts (as when I lost my dad to COVID).

It's not really finished yet, as it needs some polish, but given that it needs just a little more love, at least now I'm able to interview for steady jobs (especially now with more remote ones) and get out of this life of freelancing, which sometimes is not much fun.

Anyway, my point is that maybe there's a need for something more, as in my experience intellectual curiosity alone won't do the trick. (I've had some of that too.)

For instance, even with burnout (which I kind of postponed to the last moment), I'm motivated to go through it all to see this project get at least the (little) recognition it deserves. I don't care about recognition for myself, nor did I do it to become rich, as there were much better and easier ideas for that goal; I just want to see it "on track", with a way to evolve and become a viable alternative road to another kind of future, giving us back the power that was ours in the first place.


You need to read The Lean Startup, my friend. It will save you some years.


Just as a matter of research, given your background:

Would you consider using C++, but with the web as the UI/presentation layer?

No Javascript, no WebAssembly. Just pure C++ talking natively with the web layer.

I'm working on something where applications can use the web as an API, just as (and even more than) Javascript can, with other spices added to the whole thing, and I'm considering making C++ the second SDK (the first one, already implemented, being Swift).

Just to understand what someone like you would think about this, if you don't mind me asking.

