I had never used an Apple product before the company I recently joined gave me a MacBook Pro. I am really surprised at how bad the product quality is. Calendar notifications are very random: sometimes they fire, sometimes they don't. I have missed a couple of meetings because the notification popped up after the meeting was over. Keyboard shortcuts are similarly unreliable: sometimes they open the app, sometimes they don't.
The laptop also gets very hot if you are not sitting in air conditioning. Not sure if it is this specific laptop or a general issue.
So do people just buy these to look cool? I tried using a MacBook many times, but often got frustrated and went back to my good old Linux laptop for development. Doesn't look quite as slick, but certainly gets the job done.
I develop on this thing. It runs a great Unix OS. I can't stand desktop Linux. The hardware quality was the best by a wide margin before the latest generation. Battery life is also great. I like them for development work when they are stable.
A lot of people are also really invested in the ecosystem. My entire photo collection is on iCloud. I use an iPhone. I can copy and paste between my computer and phone. My Apple Watch unlocks the computer when I'm near... The list goes on.
But now I feel like Apple is a fantastic phone company that also happens to make some computers. They have been degrading pretty badly.
It's also that Windows/Linux have many of these issues as well. It's not as if Windows 10 notifications are clear and intuitive. When I go to my desktop after a day of work, Windows will slowly replay every single Slack message and email I got all day as individual notifications, one at a time, for almost an hour.
I think it's less that OS X is bad now and more that it's finally degraded to the level of annoyance that people have just gotten inured to with Windows. That's not to say it's a good thing, but at this point I have known bugs and annoyances with all of the computers I work with, no matter the platform.
Some of it is also that Apple has a "real" integrated ecosystem. To your point, you can easily move things between iOS and OS X. If you're watching something on your Mac, you can throw it to an Apple TV or your AirPods. Windows doesn't have a version of that that "just works". The closest you get is opting into Google's ecosystem and going Chromecast/Android, but I'd rather not trust Google with even more of my info.
Honestly I'd dare to say most of Apple's market right now is purely from vendor lock-in. Both their hardware and software are getting worse, but not bad enough for people to switch their entire digital lives to a different ecosystem... not yet, anyway.
My first Mac was an employer-provided MBP in... oh, 2011 or so. Before that I'd used DOS, Windows (3.1 and up, including NT4 and 2K) and Linux (Mandrake, Debian, Gentoo, Ubuntu, roughly in that order with a little Redhat and Fedora here and there). I'd seen some early OSX server edition thing, but not really used it, and I'd used pre-OSX Macs at school (hated them, "it's more stable and faster than Winblowz" my ass). Some exposure to Solaris, too. Used BeOS (loved it) and QNX on home machines for various purposes, as well.
The MBP was the first laptop I'd used that 1) had a trackpad good enough that I didn't feel like I needed to carry a mouse around to use it for more than 10min at a time, and 2) had battery life good enough that I didn't feel like I needed to take my power supply with me if I'd be away from my desk for more than an hour. It had every port I was likely to need for most tasks. In short, it was the first time I'd used a laptop that was actually usefully portable as a self-contained device. They kinda ruined that appeal by going all-USB-C and The Great Endongling, but that's another story.
It was also very stable, and over time I came to really appreciate the built-in software. Preview's amazing (seriously, never would have thought a preview app would make a whole computing platform "sticky" for me, but here we are, it's that good), Safari's the only browser that seems to really care about power use, terminal's light and has very low input latency, it comes with a decent video editor, an office suite I prefer over anything I've used on Linux, and so on. In short it's full of good-enough productivity software that's also light enough on resources that I don't hesitate to open them, and often forget they're still open in the background.
These days I like having a base OS that's decent, includes the GUI and basic productivity tools, and that's distinctly separate from my user-managed packages (homebrew) rather than having them all mixed up together (yes, I could achieve this on Linux, if it had a core, consolidated GUI/windowing system so various apps weren't targeting totally different windowing toolkits, but it doesn't, so separating a capable and complete GUI "base OS" from the rest of one's packages gets tricky). There are quite a few little nice-to-haves littered around the settings and UI. Most of the software is generally better polished UX wise than Linux or Windows, and that doesn't just mean it's pretty—it performs well and, most importantly, consistently. There are problems and exceptions to "well and consistently" but there are so many more issues on competing platforms that even if it's gotten worse, it's still much nicer to use.
Given the premium on hardware (that's come and gone—at times there almost wasn't one if you actually compared apples to apples [haha], but right now it's large) I'd rather use Linux (or, well, probably a BSD, but that'd mean even more fiddling most likely), but the only times that's seemed to function genuinely well and stably compared to its competition were when I either kept tight control over every aspect of the system (time-consuming, and it meant figuring out how to do any new thing I needed to do that other systems might do automatically, which wasn't always a great use of time, to put it mildly) or in the early days of Ubuntu (talking pre-PulseAudio, so quite a while ago), which was really sensible, light, and polished for a time.
I do still run Windows only for gaming, and Linux on servers or in GUI-equipped VMs for certain purposes.
It's not just Apple. I've got a Mi phone, and sometimes reminders pop up hours after the event they were set for. They've mucked around with the default Android lock screen to save power, and I think this is causing the problem.
The devices are so complicated now that they can't get their most basic functions right.
> Calendar notifications are very random: sometimes they fire, sometimes they don't. I have missed a couple of meetings because the notification popped up after the meeting was over.
I see something similar and assume it happens because Mail / Calendar rely on .ics attachments (not sure what the behaviour is with the Gmail integration). I believe this means that if Mail is closed you don't get Calendar updates until you open both and refresh.
Either way I find I have to refresh Mail and Calendar a lot to keep them in sync.
Heating is either a hardware issue or something like a broken program running continuously - on a normal setup that doesn’t happen.
Calendar / Todos depends on the backend. If you’re using Exchange, check the settings to confirm that it’s not set to poll every hour or something like that.
Varargs functions require at least one non-vararg parameter (va_start needs the last named parameter to work from).
(As int_19h said:) On some architectures, vararg functions have their own distinct ABI, so (...) as a declaration is not compatible with any definition that doesn't also use (...).
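A minimal sketch of the named-parameter point (the function name and values here are just for illustration): va_start is anchored to the last named parameter, so a declaration with only (...) leaves it nothing to start from.

```cpp
#include <cstdarg>
#include <cstdio>

// The named parameter `count` is what makes va_start usable; without at least
// one named parameter there is nothing to anchor the va_list to.
int sum(int count, ...) {
    va_list args;
    va_start(args, count);          // must name the last non-vararg parameter
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int); // pull each vararg off as an int
    va_end(args);
    return total;
}

int main() {
    std::printf("%d\n", sum(3, 1, 2, 3)); // prints 6
}
```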
It is an extraordinary feat that the OP took only a couple of weeks to create a working programming language. I always thought it takes years to create a new language. It will be interesting to hear from the OP how he could create a new language in a couple of weeks.
A programming language is a translation scheme. If you know your scheme upfront, it's trivial to translate. C-like languages are easier to translate to binaries because, well, C is glorified assembly. The analysis parts may take longer, but when you start writing a language you don't need to do much analysis, especially when writing a C-like language. Parsing can be automated, and given a clean grammar like Go's, it's trivial to parse.
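To make "translation scheme" concrete, here's a toy sketch (nothing from the OP's project; the grammar and instruction names are made up): a recursive-descent parse of "1+2+3"-style expressions that emits stack-machine code as it goes. Real languages add more productions and more analysis, but the shape is the same.

```cpp
#include <cstdio>
#include <string>

static const char* p; // cursor into the source text

// expr := digit ('+' digit)*
// Each grammar rule maps directly to output instructions: that's the
// "translation scheme" - parse a construct, emit its code, move on.
void parse_expr() {
    std::printf("PUSH %c\n", *p++);      // first operand
    while (*p == '+') {
        ++p;                             // consume '+'
        std::printf("PUSH %c\n", *p++);  // next operand
        std::printf("ADD\n");            // combine the top two stack values
    }
}

int main() {
    std::string src = "1+2+3";
    p = src.c_str();
    parse_expr(); // prints: PUSH 1, PUSH 2, ADD, PUSH 3, ADD
}
```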
> Multiple function templates that match the argument list of a function call can be available. C++ defines a partial ordering of function templates to specify which function should be called. The ordering is partial because there can be some templates that are considered equally specialized.
> The compiler chooses the most specialized template function available from the possible matches.
Essentially, it is the algorithm by which the C++ compiler selects which template specialization to use at your 'call' site. Function templates can be overloaded and essentially create an entire family of specializations. As a C++ programmer, you either modify+compile+run until you get your desired call/specialization, or you take the plunge once you start to heavily rely on meta-template magic and try to grok the ordering scheme defined by the standard.
Crucially, this is completely different from, say, the way function overload resolution happens in C++. Fun stuff happens in your brain when you try combining different types of selection/overloading (ever tried combining template and regular function overloading on a templated class method?).
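A minimal example of partial ordering in action (just the classic T vs T* pair, nothing project-specific): both templates match a pointer argument, and partial ordering picks the one that is more specialized.

```cpp
#include <iostream>

template <class T>
void f(T)  { std::cout << "f(T)\n"; }   // matches anything

template <class T>
void f(T*) { std::cout << "f(T*)\n"; }  // only matches pointers, so it is more specialized

int main() {
    int x = 0;
    f(x);   // only f(T) matches            -> prints "f(T)"
    f(&x);  // both match; partial ordering selects f(T*)
}
```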
Being slower is not, I think, the major downside. It is that an entire class of errors - race conditions - is basically outside the scope of the tool. Which is understandable! Race conditions are hard, and when I read about the tool, my first thought was "How are they handling race conditions?" and it turns out, essentially, they're not. But race conditions are also the hardest part about debugging multithreaded applications.
I'm not sure if the tool ensures deterministic scheduling of threads on the single core, but I doubt that it does. If it does not, then replay will not be deterministic, which means you could encounter different race condition outcomes on playback. If it does, then while you may have deterministic playback, the tool is unlikely to help with the class of race conditions that require simultaneous execution.
To be clear: I'm not criticizing the tool or the work of the people. If I were to design such a tool, I would probably start with a single core as well. It seems like a valuable tool and great progress for software debugging. But I do think race conditions in multithreaded programs are a current limitation.
"RR preemptively schedules these threads, so context switch timing is nondeterminism that must be recorded. Data
race bugs can still be observed if a context switch occurs
at the right point in the execution (though bugs due to
weak memory models cannot be observed)."
The "weak memory model" part means it won't help with, say, debugging lock-free algorithms where you screw up the semantics.
You should read https://arxiv.org/abs/1705.05937 so you don't need to speculate. rr absolutely does guarantee that threads are scheduled the same way during replay as during recording, otherwise it wouldn't work at all on applications like Firefox which use a lot of threads.
Very cool stuff! And yes, I took a look at the paper, as I noted in my edit. But I think there are still two classes of race conditions outside of its scope: ones that require simultaneous execution (where you can get surprising interleavings) and lock-free algorithms where correct use of the memory model is paramount. In my personal experience, these are the hardest problems to debug.
Even those are probably not 100% outside of its scope. I forget the details of chaos mode, but that kind of induced thread-switching can cause just the kind of interleaving you seem to be talking about.
What rr cannot capture is a very small subclass of race conditions involving things like cache line misses - I think that's what you're alluding to by "correct use of the memory model is paramount" but it's a subclass even of those. Yes, those are hugely difficult to diagnose and it would be fantastic if tools like rr or UndoDB could capture them. But there's a vast swathe of also very difficult race conditions that this recording tech can and does help with today.
Say you have a problem which is hard to reproduce — e.g. your service gets slow in the early morning but the app-level metrics don’t show anything unusual, and maybe the problem duration is too short for someone to easily catch it in action.
You could set a trigger based on CPU or memory load so e.g. the next spike will capture a few dumps over a set period of time. You don’t have to deal with a ton of data from a simple periodic trigger or having to try to time an irregular event — nothing world-changing but a nice time-saver.
How does FaunaDB achieve atomicity without some sort of two-phase commit? What I understood from the article is that each node in a replica commits independently from the distributed transaction log. So, if a transaction updates data on multiple nodes in a replica, it can happen that the commit on one of the nodes fails. In that case, will there be a partial commit?
I am not sure how you can generalize. I would not consider it a red flag unless the candidate in question has a history of hopping too quickly. I have direct/indirect experience of being on both sides. I once left a company within a couple of months because the company's culture was totally different from what I expected. I have also seen a friend of mine get fired a couple of months after joining a company because the company felt he was a misfit.
It is absolutely a red flag. If you've committed to another company, then you have no business interviewing elsewhere.
If you don't take a job seriously at one company, why would I expect you to take a job seriously at my company?
If I offer you a job, how can I be sure you'll show up for it, and not just take the next offer that comes along? I can't. So you're no longer in consideration as soon as you tell me you're disloyal to your other company.
In some industries in America (especially media companies, but it's also common in high-end retail), if your boss hears that you're looking for another job, you're immediately fired. Sometimes it's even written into the contract, if you have one.
I think the key word is "recently". It might have been many months already for the interviewing guy, but it sounds more like 2-8 weeks ago, and that is really fast (==bad) job hopping.
I guess it depends on which team you are on. I have worked for some of the well-known MNCs (not FANG) in Bangalore, and some of the teams I was part of did really hardcore development work. Some of my team members in Bangalore were more competent than our US counterparts.