I'm in your boat. I tried TikTok out a few times, including making a new account, but it never showed me good content. I had maybe one or two longer sessions, but never felt the need to go back, like I (unfortunately) do with Reddit or YouTube. I could never understand why it was so popular, but maybe I'm just a curmudgeon.
I think that's part of why it's always been a bit of a head-scratcher for me: I didn't really go into it as a curmudgeon. I was genuinely interested in it, people seemed to like it, and I wanted to try something new. It just never worked out for me at all.
I even had people telling me in all seriousness "I must secretly like the content", as in the algorithm knows better than I do what I like. Which is kind of a weird and maybe even disturbing idea to buy into if you think about it.
I was told to keep at it, which I did. I'd put it aside for a long time, go back to it, and repeat the process over and over again. Eventually I just gave up. It always felt like it was targeting some specific demographic by default and never got out of that algorithmic optimization spot for me.
>It gives me hope that teenagers are watching his videos and becoming inspired to go into infrastructure.
Sadly, that was me ~10 years ago, but the lure of FAANG money was too strong and I went into EE/CS after 1 year as a civil engineering major. I wonder if one day we will really start feeling the effects of this talent reallocation, and civil engineering will become a higher-paying profession.
I think they are correct; do you have a source? To my knowledge, the only other major components are the fully connected layers, which are not big contributors.
It's quadratic, because of the dot product in the attention mechanism.
You can use KV caching to get rid of a lot of the redundant matrix multiplications, but even after you have cached everything, you still need to calculate the dot product k_i * q_j for every pair, with i and j being the token indices. With n tokens, that gives O(n*n).
But you have to remember that this is only n^2 dot products. It's not exactly the end of the world at context sizes of 32k, for example. It only gets nasty in the hundreds of thousands to millions.
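Here's a minimal sketch of that, with made-up sizes (head dimension 64, a 32-token context) and single-head attention in NumPy. The cache spares us from recomputing K and V for earlier tokens, but each new query still has to be scored against every cached key:

```python
import numpy as np

d = 64          # head dimension (made up for the sketch)
n_tokens = 32   # context length (made up)

k_cache, v_cache = [], []   # the K-V cache, one entry per token
total_dot_products = 0

for step in range(n_tokens):
    q = np.random.randn(d)              # query for the newest token
    k_cache.append(np.random.randn(d))  # its key and value go into the cache
    v_cache.append(np.random.randn(d))

    # Cached keys/values are reused, but the new query is still scored
    # against every cached key:
    scores = np.array([q @ k for k in k_cache])   # step+1 dot products
    total_dot_products += len(k_cache)

    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    out = weights @ np.array(v_cache)             # attention output for this token

# 1 + 2 + ... + n = n(n+1)/2 dot products over the whole sequence, i.e. O(n^2)
print(total_dot_products, n_tokens * (n_tokens + 1) // 2)
```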
For small values of N, the linear terms of the transformer dominate. At the end of the day, a double layer of 764*2048 is still north of 3.1 MM flops/token/layer.
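For a rough feel of where the crossover sits, here's a back-of-envelope sketch; the 768x2048 feed-forward size is a stand-in rather than any particular model, and the QKV/output projections are ignored:

```python
# Back-of-envelope only; sizes are assumptions, counting 2 FLOPs per multiply-add.
d_model, d_ff = 768, 2048

# Two feed-forward layers (d_model -> d_ff -> d_model) per token per layer:
ffn_flops = 2 * 2 * d_model * d_ff          # ~6.3M FLOPs/token/layer

for n in (2_048, 32_000, 1_000_000):
    # Scoring one query against n keys, plus the weighted sum over n values:
    attn_flops = 2 * 2 * n * d_model
    print(f"n={n}: attention/FFN ratio ~= {attn_flops / ffn_flops:.1f}")
```

Under these assumptions the attention-score term only catches up with the feed-forward term once the context is on the order of a couple thousand tokens, and only dwarfs it well beyond that.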
I hardly think it's fair to say that they were 'taking advantage'. If Google says unlimited, should the typical person really expect that to be taken away? That's a really bad look for Google. "If we offer something that's good value to you, expect it to be taken away suddenly in the future." Those are not the actions of a company I would want to rely on.
With modern connected devices, I absolutely do expect for features (or even total functionality) to be removed on a whim by the manufacturer. Cloud services are no different.
Lesson being: do not rely on devices or services that rely on a third party. They absolutely will screw you; it’s only a matter of time.
If you do not believe this, then I would say that you have not been around this industry long enough. There may be rare exceptions, but this should be your rule if you care about the longevity of your software and data.
Of course it is not acceptable. But I imagine every company out there that has skin in this game would boldly taunt in reply: “whachha gunna do ‘bout it?”
Sure, we could boycott. How often is that ever effective in this modern age? Is it ever? I don't think I've heard of one working outside of history books.
Our lawmakers are corrupt and wholly in the pocket of those who would stand to benefit from perpetuating this shameful status quo. From my perspective, there ain’t a damn thing we can do to fix it (or a thousand other problems), unless we are ready to start holding them accountable. I think that will take putting some of their heads on pikes.
Most unlimited services operate like all-you-can-eat buffets: there is some secondary constraint that keeps usage bounded, e.g. the person's ability to eat food.
> unlimited, should the typical person really expect that to be taken away?
Absolutely. The word "unlimited" has been misused by so many companies (especially ISP and mobile) that anyone who has their eyes open should expect it to mean limited.
Also, if there is a deal that is exceedingly better than other options, don't be surprised when the rules change later.
I never really got how proofs are supposed to solve this issue. I think that would just move the bugs from the code into the proof definition. Your code may do what the proof says, but how do you know what the proof says is what you actually want to happen?
A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction, and (hopefully) it will be proven that its invariants always hold. (This is a separate step from proving that the model corresponds to the ultimate deliverable of the formal development process, be that source-code or binary.)
Bugs in the formal spec aren't impossible, but use of formal methods doesn't prevent you from doing acceptance testing as well. In practice, there's a whole methodology at work, not just blind trust in the formal spec.
Software developed using formal methods is generally assured to be free of runtime errors at the level of the target language (divide-by-zero, dereferencing NULL, out-of-bounds array access, etc). This is a pretty significant advantage, and applies even if there's a bug in the spec.
> A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction
This is the fallacy people have when thinking they can "prove" anything useful with formal systems. Code is _already_ a kind of formal specification of program behavior. For example, `printf("Hello world");` is a specification of a program that prints hello world. And we already have an abundance of tooling for applying all kinds of abstractions imaginable to code. Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness. For example, Rust has pretty much done so for a large class of bugs prevalent in C/C++.
The mathematician's wet dream of applying "mathematical proof" on computer code will not work. That said, the approach of inventing better abstractions and making it hard if not impossible for the programmer to write the wrong thing (as in Rust) is likely the way forward. I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plague C/C++.
Of course, as long as the programming language is Turing Complete you can't make it impossible for the programmer to mistakenly write something they didn't intend. No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world". Computers _already_ "do what I say", and "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intentions (by this point you'd have to wonder whether the intention is the conscious one or the subconscious ones).
> thinking they can "prove" anything useful with formal systems
As I already said in my reply to xmprt, formal methods have been used successfully in developing life-critical code, although it remains a tiny niche. (It's a lot of work, so it's only worth it for that kind of code.) Google should turn up some examples.
> Code is _already_ a kind of formal specification of program behavior.
Not really. Few languages even have an unambiguous language-definition spec. The behaviour of C code may vary between different standards-compliant compilers/platforms, for example.
The SPARK Ada language, on the other hand, is unambiguous and is amenable to formal reasoning. That's by careful design, and it's pretty unique. It's also an extremely minimal language.
> `printf("Hello world");` is a specification of a program that prints hello world
There's more to the story even here. Reasoning precisely about printf isn't as trivial as it appears. It will attempt to print Hello world in a character-encoding determined by the compiler/platform, not by the C standard. It will fail if the stdout pipe is closed or if it runs into other trouble. Even a printf call has plenty of complexity we tend to just ignore in day to day programming, see https://www.gnu.org/ghm/2011/paris/slides/jim-meyering-goodb...
> Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness
You've roughly described SPARK Ada's higher 'assurance levels', where each function and procedure has not only an ordinary body, written in SPARK Ada, but also a formal specification.
SPARK is pretty challenging to use, and there can be practical limitations on what properties can be proved with today's provers, but still, it is already a reality.
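As a very rough analogy (not SPARK itself: SPARK contracts are written as Ada `Pre`/`Post` aspects and discharged statically by a prover, not checked at run time), the body-plus-specification idea for a made-up function looks something like:

```python
# Analogy only: pre/postconditions as runtime assertions on a toy function.
# In SPARK these would be contract aspects on the declaration, proved once
# and for all rather than checked on each call.
def saturating_add(x: int, y: int, limit: int) -> int:
    # "Precondition": inputs are within range
    assert 0 <= x <= limit and 0 <= y <= limit

    result = min(x + y, limit)

    # "Postcondition": the result never exceeds the limit
    assert 0 <= result <= limit
    return result
```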
> Rust has pretty much done so for a large class of bugs prevalent in C/C++
Most modern languages improve upon the appalling lack of safety in C and C++. You're right that Rust (in particular the Safe Rust subset) does a much better job than most, and is showing a lot of success in its safety features. Programs written in Safe Rust don't have memory safety bugs, which is a tremendous improvement on C and C++, and it manages this without a garbage collector. Rust doesn't really lend itself to formal reasoning though, it doesn't even have a proper language spec.
> The mathematician's wet dream of applying "mathematical proof" on computer code will not work
Again, formal methods aren't hypothetical.
> I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plague C/C++.
It is not. Safe languages offer rock-solid guarantees that certain kinds of bugs can't occur, yes, and that's very powerful, but is not equivalent to full formal verification.
It's great to eliminate whole classes of bugs relating to initialization, concurrency, types, and object lifetime. That doesn't verify the specific behaviour of the program, though.
> No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world"
That comes down to the question of how you get the model right. See the first PDF I linked above. The software development process won't blindly trust the model. Bugs in the model are possible, but in practice they seldom go unnoticed for long, and they are not a showstopper for using formal methods to develop ultra-low-defect software.
> "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intention
It's not clear what your point is here. No software development methodology can operate without a team that understands the requirements, and has the necessary contact with the requirements-setting customer, and domain experts, etc.
I suggest taking a look at both the PDFs I linked above, by way of an introduction to what formal methods are and how they can be used. (The Formal methods article on Wikipedia is regrettably rather dry.)
I think the reason that formal proofs haven't really caught on is because it's just adding more complexity and stuff to maintain. The list of things that need to be maintained just keeps growing: code, tests, deployment tooling, configs, environments, etc. And now add a formal proof onto that. If the user changes their requirements then the proof needs to change. A lot of code changes will probably necessitate a proof change as well. And it doesn't even eliminate bugs because the formal proof could include a bug too. I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread but it seems like a lot of those checks are already integrated in build tooling in one way or another.
Yes, with the current state of the art, adopting formal methods means adopting a radically different approach to software development. For 'rapid application development' work, it isn't going to be a good choice. It's only a real consideration if you're serious about developing ultra-low-defect software (to use a term from the AdaCore folks).
> it doesn't even eliminate bugs because the formal proof could include a bug too
This is rather dismissive. Formal methods have been successfully used in various life-critical software systems, such as medical equipment and avionics.
As I said above, formal methods can eliminate all 'runtime errors' (like out-of-bounds array access), and there's a lot of power in formally guaranteeing that the model's invariants are never broken.
> I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread
No, this doesn't accurately reflect how formal methods work. I suggest taking a look at the PDFs I linked above. For one thing, formal modelling is not done using a programming language.
You're mixing up a development problem with a computational problem.
If you can't use formal proofs where they are supposed to be necessary, just because the user can't be arsed to wait, then the software project's conception is simply not well designed.
I don’t think that would necessarily favor small artists. That would just favor artists who are listened to by people who don’t use Spotify a lot.
Right now someone who only streams a few songs gets a very small “vote” (assuming pay is per stream). That would make it so that everyone had the same “voting” power. But I doubt there’s much correlation between people who use Spotify less and small artists. In fact that’s probably a negative correlation if anything, and this could end up hurting local artists.
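To make the difference concrete, here's a toy sketch of the two payout models with made-up numbers (two subscribers at $10/month, one heavy listener and one light one):

```python
# Toy illustration only; listeners, artists, and stream counts are invented.
subscription = 10.0

# listener -> {artist: stream count}
listeners = {
    "heavy_user": {"big_artist": 990, "small_artist": 10},
    "light_user": {"small_artist": 10},
}

# Pro-rata (roughly today's model): one big pool split by share of total streams.
pool = subscription * len(listeners)
total_streams = sum(sum(s.values()) for s in listeners.values())
pro_rata = {}
for streams in listeners.values():
    for artist, n in streams.items():
        pro_rata[artist] = pro_rata.get(artist, 0.0) + pool * n / total_streams

# User-centric ("equal vote"): each subscriber's fee is split only among
# the artists that subscriber actually streamed.
user_centric = {}
for streams in listeners.values():
    listener_total = sum(streams.values())
    for artist, n in streams.items():
        user_centric[artist] = (user_centric.get(artist, 0.0)
                                + subscription * n / listener_total)

print(pro_rata)      # big_artist ~$19.60, small_artist ~$0.40
print(user_centric)  # big_artist ~$9.90,  small_artist ~$10.10
```

Under the per-stream pool, the light listener's $10 is effectively redistributed according to the heavy listener's habits; under the equal-vote model it goes entirely to the one artist they actually played.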
I'm not sure about Spotify, but I have seen this problem with Netflix: I'm an adult paying full price for my Netflix subscription, and for that price I would love to have a few great and expensive movies every month for my age group.
But what I usually see is lots of movies made for teens who are binge watching Netflix even though they are paying the same. Netflix has some public presentations on their algorithm and from the presentations it looks like they are optimizing for watches without weighting by subscription revenue per watch.
Another way to look at it is that you are now paying less for the base ticket which used to include those things. This allows consumers more choice in what part of the experience they want to pay for.
If someone wants to build a computer and they don't need a graphics card, it's better to give them the option not to get one for less money, or to get one for more money.
> Another way to look at it is that you are now paying less for the base ticket which used to include those things.
If that were true, sure. But I somehow doubt that's true. I don't have any data to verify this (nor do I know how one would get it), but I would expect that the inflation-adjusted cost of a ticket 20 years ago (when you weren't yet getting nickel and dimed for everything) is comparable to what you pay today for just the base price.
Airline prices are much cheaper than they used to be. Imo airfare is the crowning achievement of capitalism and proof that efficient, free markets are unbeatable economic engines. Airlines barely make any money at all off flights now; their profit engine is the credit card they push.
> Airline prices are much cheaper than they used to be.
The base price listed, maybe, but by the time you add all the fees and fees on fees, I'm not so sure they're any cheaper. The last time I flew, a few months ago, the initial flight price of ~$600 on a kayak search became ~$1500 by the time I had the final price, 5 minutes of clicking later. I ended up buying the ~$800 flight that became ~$1300 after all the add-on fees.
The real problem is that it makes comparison shopping incredibly difficult, since each airline packages its dozen fees differently. Years ago I could list all the prices from A to B and pick the best time and price from a single screen, and that was that, since every price was the final price. It was so easy.
Now I have to go through each flight option individually, clicking through the whole reservation sequence, to finally get a price. What used to be a simple search can take many hours. If you've done this, I'm sure you know the add-on fees are different for each flight option! I'm sure most people just give up and buy whichever, rather than waste the time, which is exactly what the airlines want. I'm obsessive enough that I'll spend the time, but what a waste.
> Imo airfare is the crowning achievement of capitalism
Capitalism works optimally when consumers can be fully informed and comparison shop with high efficiency. The airline pricing these days is pretty much the exact opposite of that, being extremely opaque by design to prevent comparison shopping.
I guess the unbundling of basics is the way the airlines flout this law.
I just looked up flights from SFO to EWR on kayak. Cheapest one was $426.
But when I click through to the united site, the fine print says that even for a carryon bag there is a +$25 charge. So even if you don't check any bags (which of course is another extra fee), there is a $25 fee for the carryon. So the real price is $426+$25 at a bare minimum.
But let's be honest, that is a basic economy seat with no knee room for an adult and no checked bag. So unless you are a child traveling for two days, you need to pay extra for both legroom and the checked bag. The bag is another +$70. I don't care to spend the time to go through the whole process to get the final price but clearly it's way more than $426.
If you're comparison shopping, now repeat this hassle for every single flight on the search result. You will (and I do) spend many wasted hours on this infuriating nonsense.
I only bring a backpack on flights, which is always free. Most of my traveling is short-term or between places where I have clothes, though; the only real exception is work stuff, and they pay the bag fee.
Is that an obvious association to the relevant audience? Mixing the French “suisse” with the German abbreviated “ing” is a little odd to my (French) eyes. Or is “suisse” also common in Swiss German?
IDK how representative I am but as a German "Suisse" is clearly referring to Switzerland (maybe because it's in some Swiss brand names, e.g. Credit Suisse?) and my two associations with "ing" are ING direct banking and industrial engineering (outside the Bachelor/Master system the degree title prefix for a professional engineer was/is commonly indicated as "Dipl. Ing.").
But I wouldn't overthink this. Someone likely pitched this as a clever marketing campaign to show "technological leadership" (the news section has an article dedicated to the site being one of the first .ing domains as if this is a meaningful technological achievement) and got funding for it. That doesn't mean the site will be there or be maintained a few years from now.
Given that French cantons don't speak German and German cantons would rather speak English than be caught speaking French, no, I doubt this linguistic concoction is obvious to either.
Might work in Fribourg, which is bilingual, but that's still a stretch.