
I was just a kid in the 80s, but indeed remember watching "Sonda" - it was fantastic! Going through some episodes now on YouTube does bring back memories. There was another show in the late 80s called "Bliżej świata" (English: "Closer to the World"), which would run for an hour or so on a Sunday night and show snippets from western TV shows. The contrast with the (usually) very bleak communist Polish TV was striking. For me, back then, watching both of these shows felt like looking through a window into a different world.


Wow!... if this is true (and there's little reason to think otherwise), then it's completely disheartening. The effort to be honest and upfront with this person would be minimal, and realistically would not even have to change anything on Microsoft's side; after all, they are free to develop whatever tools they want.


It also seems broken on Linux (I'm running vim 8.2 on Ubuntu 18.04), sadly. There are lots of GitHub threads about it.


I have a computer on my network that runs an nfs server and hosts media, then use kodi on nvidia shields (or other computers) to access it. It works extremely well... as in not even a single issue for years now. I should stress that this is at the level of just serving files, as I don't show any metadata associated with the media (as it's mainly personal home movies, etc).


How do you authenticate to the NFS server? Did you set up Kerberos?

I have always been reluctant to go down that path; it felt too complex for my home use case. I generally use FTP instead for this reason, but I guess it's not as efficient.


I share my media on an Ubuntu server via NFS - all I've done is set the shared media library as read-only for the devices it's shared with, and specify the (local, reserved) IP it should be sent to. The only authentication is the fact that it's LAN-only and being sent to the right (local) IP.

Kerberos would be far too complex to be warranted here - all that's needed is nfs-kernel-server and /etc/exports on the server side.
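
For anyone curious, a minimal export along those lines might look like this (the media path and subnet here are placeholders for your own setup, not my actual config):

    # /etc/exports - read-only share, restricted to the local subnet
    /srv/media  192.168.1.0/24(ro,all_squash,no_subtree_check)

After editing, `sudo exportfs -ra` reloads the exports without restarting the server.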

Granted, my network is reasonably locked-down (MAC address filtering, (reserved) IP addresses available matching the number of devices on the network), but security beyond that has never really crossed my mind.


I don't password protect the media files on the nfs server - they are visible password-free from any machine on my local network. Of course if someone has sensitive material and/or the local network is shared, say, then some extra setup is needed.

EDIT: another common use-case for me is basically grabbing lots of youtube videos/playlists via youtube-dl, which then lets me watch them, commercial-free, on anything and everything that can run kodi, without jumping through hoops (i.e. browser addons or sideloading third party youtube apps, etc)
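
For the record, the youtube-dl side of that is just something like (the output path and playlist ID are placeholders):

    youtube-dl -o '~/media/youtube/%(playlist_title)s/%(title)s.%(ext)s' \
        'https://www.youtube.com/playlist?list=PLAYLIST_ID'

The output template keeps each playlist in its own folder, so kodi picks them up as separate collections.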


Just to add to what others said, you could also run it on Android tablets. Sadly, the Linux-running tablet options are very limited.


Could you give an example of a "non-trivial" thing you're talking about (i.e. a few lines of code)? And how do you solve it with Bokeh? I agree that matplotlib's functional interface is somewhat messy, but that is somewhat historical, and stems from wanting to mimic how matlab works. I personally find matplotlib's OO interface both very powerful and somewhat intuitive, once one gets the hang of it, hence I'm curious about specific cases where it fails for you.


See my other post in this comment chain. Matplotlib is not good with handling large (e.g. more than 100K) datapoints in 2D or 3D if you want interactivity or animations. Bokeh handles it like a champ!


Aren't Roku devices notorious for gathering data on how you use their hardware and what you watch? In fact, as far as I know, you can't even use their TVs without actually creating an account with them. If this is still true, it's completely NUTS!


When you set up a Roku TV, you have the option to never connect to the Internet and use it as a dumb TV. In that mode, no WiFi or Ethernet connection is active and there's no connection to a Roku account.

If you've already connected the TV, you can factory reset it to an unconnected state.


Roku does like to log all your actions, but just use something like pi-hole and it's nicely blocked.


You're placing too much trust into the effectiveness of pi-hole and its associated filter lists. Here are some failure modes I can think of:

* using fallback hardcoded IPs when DNS fails

* using DoH so it's impossible to tamper with the response

* using the same domain for spying as other critical functions

* new domains might not show up on the filter lists right away, and if the TV keeps a backlog of failed requests, all your viewing history might be uploaded when that happens


Does that mean that if one starts a new Julia kernel and imports a package, something different happens each time they do that? If not, then it would seem one could at least save on those super long imports. Reading about this a bit more, it almost seems like whatever PackageCompiler.jl is doing could be automated and baked into the core Julia executable with simple options/flags.


There will likely be more caching in the future, but it's a hard problem and for now, they're working on lower hanging fruit to speed up compile times.

> Reading about this a bit more, it almost seems like whatever PackageCompiler.jl is doing could be automated and baked into the the core julia executable with simple options/flags.

Not really, no. Fundamentally, PackageCompiler is building a monolithic executable with your desired packages baked into it. Every time you want to add a new method to compile, you need to rebuild the whole thing and you cause it to be larger on the disk and slower to start up.

Even if we could quickly cache methods, I have hundreds of Julia packages installed locally on my machine, and regularly call methods with exotic signatures. If I baked every method I ever compiled into my sysimage, it'd probably be hundreds of terabytes in size at least. You have to remember that every time I call a function f on arguments x, y, and z, I need to compile a new method for each distinct signature

    f(::typeof(x), ::typeof(y), ::typeof(z))
There are more Julia function signatures that it's possible to create and compile from just Base functions and types than there are atoms in the universe.

Of course, it's possible to do more caching, and do it faster, than we currently do, but I just want to emphasize that it's a hard problem.


People are being downvoted to oblivion for mentioning this, but for some of us coming from different languages, it definitely takes some time to get used to things. I'm new to Julia, so am probably more ignorant about this than most... but importing two packages such as DifferentialEquations and Plots on my very modern machine takes ~30 seconds (this is after they've been "precompiled"). I'm curious, assuming I haven't installed anything new or changed my installation, why does Julia not cache this compiled binary somewhere? It seems like this long step has to happen on each new-kernel import, but perhaps it could be avoided? Having an option flag that would cause Julia to redo the compilation (because the user has, say, changed something about their installation) seems like a simple solution.

What am I missing?


> but importing two packages such as DifferentialEquations and Plots, on my very modern machine takes ~30seconds (this is after they've been "precompiled").

FWIW, it is quite significantly improved in the upcoming 1.6 release.

1.5:

    julia> @time using Plots
      8.859293 seconds (15.99 M allocations: 913.783 MiB, 3.86% gc time)
    julia> @time using DifferentialEquations
     25.394033 seconds (58.78 M allocations: 3.222 GiB, 3.96% gc time)
latest master:

    julia> @time using Plots
      3.957724 seconds (7.67 M allocations: 537.424 MiB, 5.50% gc time)
    julia> @time using DifferentialEquations
      9.708388 seconds (23.08 M allocations: 1.537 GiB, 6.20% gc time)


There will be more caching in the future, but until then it sounds like you'd like PackageCompiler.jl (https://github.com/JuliaLang/PackageCompiler.jl) to build DiffEq and Plots into your sysimage.
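
For reference, a sysimage build along those lines is roughly (the output filename is just an example):

    julia> using PackageCompiler

    julia> create_sysimage([:Plots, :DifferentialEquations];
                           sysimage_path="custom_sysimage.so")

and then start Julia with `julia --sysimage custom_sysimage.so`. The trade-off is exactly what's described elsewhere in this thread: the image is big, and you have to rebuild it whenever the package set changes.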


As an alternative, don't import all of DifferentialEquations, just import the sublibraries you need. For ODEs usually just DiffEqBase and OrdinaryDiffEq are sufficient.
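
E.g., the standard first ODE from the DiffEq docs only needs (a sketch; I haven't timed the load difference myself):

    using OrdinaryDiffEq  # instead of `using DifferentialEquations`

    f(u, p, t) = 1.01u                     # du/dt = 1.01u
    prob = ODEProblem(f, 0.5, (0.0, 1.0))  # u(0) = 0.5 on t in [0, 1]
    sol = solve(prob, Tsit5())             # 5th-order Tsitouras method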


I see no one has answered your technical question yet, so I'll give (one) reason why this is difficult in the land of Julia.

A fundamental idea in Julia is Multiple Dispatch; methods are extended many times with different types, and common "verbs" (such as `open()`, `write()`, etc...) are used to operate upon a wide variety of objects (files, sockets, plot handles, sound devices, etc...). Exactly which pieces of code gets called when you call `open(foo)` depends not only on the type of `foo`, but also on what methods have been defined for `open()`.
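
A toy illustration (a made-up `describe` function, not from any package):

    julia> describe(x::Int) = "an integer";

    julia> describe(x::String) = "a string";

    julia> describe(1)
    "an integer"

    # a package loaded later can add, say, describe(x::Float64),
    # changing which method runs for describe(1.0)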

As a concrete example, the deep learning library `Flux.jl` has code that checks whether CUDA packages have already been loaded. If they have, `Flux.jl` will load its GPU support libraries as well. This means that:

    using CUDAnative
    using Flux

and

    using Flux
    using CUDAnative

result in different environments, and hence different code being run. It is infeasible to precache all packages (you would, in the worst case, end up with a combinatorial explosion of packages that is `N!` in the number of distinct package-package version tuples), and intelligent ways of thinning that down would quickly become untenable due to the highly dynamic nature of Julia.

Really what this is working around is the lack of support in the Julia package ecosystem for a way for packages to coordinate behavior; with stronger systems in place for a package environment to say "Hey, this is a GPU-using environment, and all packages should turn on GPU support!" there wouldn't be a need for this kind of dynamic behavior.

This is an example of something that happens all the time in the Julia world; when we lack a proper structure, users hack their way out of it with crazy dynamic code that does things that the language supports. We don't want to take that dynamic power away because that's where our edge over many other languages comes from, but it does make it challenging for the compiler to support everything in as low-latency a manner as a less-dynamic language would allow.

There are plenty of possible workflow improvements, of course, such as tighter PackageCompiler.jl and Revise.jl integration that would allow you to bake system images per-environment while still picking up some code changes, etc... but the penalty paid is pretty high; right now, building a system image takes many minutes, and it would be done for any kind of code change. There is compiler work underway to allow for more incremental system image compilation as well, but as with many things on our todo list, it must find its way into the compiler team's pipeline alongside all of the other cool things that are being worked on.

As mentioned in other comments, even without changing the workflow of Julia projects at all, incredible strides have been made over the last few releases, but I wanted to give you an idea of why certain things are harder than they might seem, when coming from other languages. As an aside, `Plots.jl` and `DifferentialEquations.jl` are two notoriously slow-to-load packages, beaten out only by other behemoth packages such as `Flux.jl` or `Turing.jl`.


I have little time to play games these days, but I do that. I have two machines, both always on: one a Windows machine for games, the other running Linux for everything else. I use a USB switch to change all the relevant devices (mouse, keyboard, headphone DAC/amp, etc.) on the fly, and adjust the monitor input to use one or the other. It works extremely well, and switching takes ~1-2 seconds.

