As for correctness, they mentioned the LLM citing links that the person can verify. So there is some protection at that level.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoffs, weighing the probability we get sued or some harm comes to us against trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants it.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
But if you don't know anything about programming, a link to a library etc. is not so useful. The same goes if you don't know tax law and it cites the tax code and how it should be understood (the code is correct but the interpretation is not).
This is the real value of AI that, I think, we're just starting to get into. It's less about automating workflows that are inherently unstructured (I think that we're likely to continue wanting humans for this for some time).
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e. some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
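To make the "procedural plan" point concrete, here's a toy sketch. Every rule, profile field, and food group below is invented purely for illustration (this is not real dietary guidance): the idea is just that once the messy information gathering is done, the construction step can look like plain rule-following.

```python
# Toy sketch: diet construction as a procedural step once the patient
# profile has been assembled from records, interviews, etc.
# All rules and categories here are made up for illustration.

def build_diet(profile):
    """Return recommended food groups for a (hypothetical) patient profile."""
    avoid = set(profile.get("allergies", []))
    diet = ["vegetables", "whole grains", "lean protein", "dairy", "nuts"]
    if profile.get("hypertension"):
        diet.append("low-sodium options")
    if profile.get("lactose_intolerant"):
        avoid.add("dairy")
    # The procedural part: filter the standard plan by the gathered facts.
    return [food for food in diet if food not in avoid]

plan = build_diet({"allergies": ["nuts"], "lactose_intolerant": True})
```

The hard part, per the argument above, is producing that `profile` dict from messy sources — the filtering itself is mechanical.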
These are cases for which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
I recently tried looking up something about local tax law in ChatGPT. It confidently told me a completely wrong rule. There are lots of sources for this, but since some probably unknowingly spread misinformation, ChatGPT just treated it as correct. Since I always verify what ChatGPT spits out, it wasn't a big deal for me, just a reminder that it's garbage in, garbage out.
Yeah, I also find that LLMs very often say something wrong just because they found it on the internet. The problem is that we know not to trust a random website, but LLMs make wrong info more believable. So in some sense the problem is not exactly the LLM, as they pick up on wrong stuff people or "people" have written, but they are really bad at figuring these errors out and particularly good at covering for them or backing them up.
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
I just asked o3 how to fill out a form 8949 for a sale with an incorrect 1099-B basis not reported to the IRS. It said (with no caveats or hedging, and explicit acknowledgement that it understood the basis was not reported) that you should put the incorrect basis in column (e) with adjustments in (f) and (g), while the IRS instructions are clear (as much as IRS instructions can be...) that in this scenario you should put the correct basis directly in column (e).
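For concreteness, here's a toy sketch of the row that reading of the instructions produces (the numbers are invented, and this reflects the commenter's stated reading of the IRS instructions, not tax advice): when the basis was not reported to the IRS, the correct basis goes straight into column (e), with no adjustment code in (f) or amount in (g).

```python
# Hypothetical Form 8949 row for a basis-not-reported sale, following the
# rule cited above: correct basis directly in column (e), no adjustment.
# Numbers are invented for illustration.

def form_8949_row(proceeds, correct_basis):
    """Fill one row: basis not reported to the IRS, 1099-B basis ignored."""
    return {
        "d_proceeds": proceeds,
        "e_basis": correct_basis,  # the CORRECT basis, not the 1099-B figure
        "f_code": "",              # no adjustment code needed
        "g_adjustment": 0,
        "h_gain": proceeds - correct_basis,
    }

# Say the 1099-B wrongly showed a 6,000 basis; the actual basis is 8,000.
row = form_8949_row(proceeds=10_000, correct_basis=8_000)
```

o3's answer (incorrect basis in (e), adjustments in (f)/(g)) would instead be the procedure for the basis-reported-to-IRS boxes.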
I think this will be fixed by having LLMs trained not on the whole internet but on well curated content. To me this feels like the internet in maybe 1993: you see the potential and it's useful, but a lot of work and experimentation has to be done to work out the use cases.
I think it’s weird to reject AI based on its current form.
"Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know; I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
I find this way of looking at LLMs to be odd. Surely we all are aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
The problem is they are being sold as everything solutions. Never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing was staying in its lane as a creator of convincing text we'd be fine.
This happens with every hype cycle. Some people fully buy into the most extreme of the hype, and other people reverse polarize against that. The first group ends up offsides because nothing is ever as good as the hype, but the second group often misses the forest for the trees.
There's no shortcut to figuring out what the truth of what a new technology is actually useful for. It's very rarely the case that either "everything" or "nothing" is the truth.
Very true, I think LLMs will be very good at confirming whatever bias you have. Want to find reasons why unpasteurized milk is good? Just ask an LLM. Want to find evidence to be an antivaxxer? Just ask an LLM!
> "Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
FYI: The actual study may not quite say what this article is suggesting. Unless I'm missing something, the study seems to focus on employee use of chat-based assistants, not on company-wide use of AI workflow solutions. The answers come from interviewing the employees themselves. There is an analysis of impacts on the labor market, but that is likely flawed if the companies are segmented based on employee use of chat assistants versus company-wide deployment of AI technology.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
I wonder if it's the platitude doing that, or the explicit affirmation that _most_ of it looks good, but just XYZ needs tweaking. That intention is explicit in the first message, and potentially implied but unclear in the second.
I'll share a different perspective on this whole founder mode debate: my instinct is that "founder mode" is a useless phrase and founders can be equally helpful or detrimental. The leaders we would generally uphold as highly successful founders who built respectable companies didn't necessarily do so through something intrinsic to being a founder, but rather by deeply _giving a shit_ about the products they build and the customer experiences of those products.
It follows that founders (or employees who were early enough to have a founding mentality), often, tend to care more about their products and services than anyone else will and this can lead to centralized decision making being highly effective. This is the case for Steve Jobs, Bill Gates, Howard Schultz, Brian Chesky, and Elon Musk (please set aside any recent personal opinions of him).
It also follows that "manager" CEOs and senior leaders are often hired into a role with different incentives that relate far less to them caring about the product, customer, or business, and more to the movement of specific metrics. In these cases, centralized decision making can lead to deterioration of a product experience to the point of irrelevance. This list might include Scott Thompson (Yahoo), Dennis Muilenburg (Boeing), etc.
I don't believe it is anything inherent to a "founder" or "manager." Simply put: centralized decision making can be effective in an organization where the centralized decision maker has the insights and is close enough to the customer and market to make good decisions. It can be massively detrimental if the person is detached from the customer and market.
It is just the case that a founder happens to be much more likely to be close to the customer and the market than a hired leader, and likely has much more of a desire to be so.
This might be wrong, but it has tracked in my career so far. I've seen great managers, horrible managers, great founders, and horrible founders. The only thing that has been consistent is that the great ones are _far_ more likely to deeply understand the customer they serve and the product they build than the bad ones.
Musk "was quite successful in the earlier days of Tesla". lol as if he's still not kicking butt. Did you see his company plucked a rocket the size of the Statue of Liberty out of the air?
I don't necessarily disagree. The comment was intended to avoid a flame war of someone arguing about his recent (personal) antics, not company performance. I updated the parenthetical to reflect this.
Hospitals and payers negotiate rates and contract at that rate before the service is provided. Assuming the service is not denied by the payer, the hospital knows that they'll only be reimbursed 18k from your insurance company (or at least has the data to know in advance, putting aside whether any one person could tell you what it will be). The 90k only served as a starting point for negotiation with payers and is usually obscenely high due to other regulatory and contractual reasons related to the negotiation process. Their "list rate" is shown on your bill, but was absolutely never expected to be received.
As a result, it's not a "loss" of revenue at the time of service, and isn't recognized as one.
Now, because GAAP requires revenue to be recognized when realized and earned, that service became "revenue" to the hospital after service, even though they haven't been paid. They might later "write that off" (i.e. recognize a loss) if the payer ultimately denies that claim, or you refuse your responsibility (i.e. your copay). But in that case, the hospital did not, in fact, make the money.
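A toy walkthrough of that flow with the thread's numbers (the variable names are simplified illustrations of the concepts, not GAAP-compliant bookkeeping):

```python
# Illustration: why the 90k "list rate" is never lost revenue, and why a
# denial later becomes a write-off against the contracted amount.

list_rate = 90_000        # chargemaster price shown on the patient's bill
contracted_rate = 18_000  # rate negotiated with the payer before service

# The gap is a contractual allowance, never booked as expected revenue.
contractual_allowance = list_rate - contracted_rate

# Only the contracted amount is recognized as revenue when earned.
recognized_revenue = contracted_rate

# If the payer later denies the claim, that revenue is written off.
claim_denied = True
write_off = contracted_rate if claim_denied else 0
net_collected = recognized_revenue - write_off
```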
There are plenty of industries in which purely being greedy is much easier than it is in healthcare. Greed alone does not explain the depth of complexity involved in the US healthcare pricing system.
To be clear, I’m not defending the system either. It’s fundamentally broken by design. But it’s certainly not solely the greed of hospitals that got us here.
Healthcare has a cartel restricting the supply of new doctors and new hospitals. Only a small number of new doctors can intern each year, ensuring the relative supply decreases versus population. The same goes for hospitals: even if you have the money and the doctors, you can't open one unless the other hospitals in the area agree one is needed and allow you to get a "Certificate of Need".
But that wasn't enough to juice profits, so pricing had to be made as opaque as possible to screw over anyone who isn't a giant insurance company, ensuring the little guy without insurance who "pays his bills" pays more than 10x anyone else in the system.
I don't know of any other industry with this level of depravity and greed.
Important clarification: you do not need to have confidentiality obligations with respect to the information or a fiduciary relationship; it need only be material, non-public information that belongs to the company (i.e. information available only to those with a fiduciary responsibility or confidentiality obligation to the company). If an insider with confidentiality obligations shares material non-public information with a person who has no confidentiality obligation, and that person trades on that information, that would be insider trading.
The link you referenced also clarifies this point, but it is different from what is written in your comment.
Note: this doesn't change the fact that the answer in this particular case is no, it's not insider trading. You are, as parent mentioned, just the first to know the news.
That’s not quite correct, it depends on the nature of the disclosure.
Someone receiving information from an insider needs to be independent of personal, financial, and quid pro quo relationships. So a random person who happens to sit next to a CEO on an airplane can trade on whatever they hear. The CEO's mistress sitting on the other side of them can't.
This exact scenario happened to me. I was flying United business class SFO to EWR and the guy next to me was writing a PowerPoint slide in 9000-point bold type "BUY XYZ CORP FOR 880 MILLION", and when I got to work a few hours later our counsel advised me that it was not at all improper to trade on that information, which we did.
Well, I’ve figured out how I’m spending my spring break- buy a ton of cheap stock in some random startup, dress head-to-toe in Microsoft swag, and then spend a few days hanging out in various SFO lounges working on fake PowerPoints declaring intent to buy the startup
Seems good. But I also feel like if you're the kind of person who can command the disposition of a billion dollars, just stop writing slides. Stand up in front of the board and say what you came to say and then sit back down.
> So a random person that happens to sit next to a CEO on an airplane can trade on whatever they hear.
I am under the impression that this would also be illegal, because trading on the basis of MNPI (material non-public information) is itself illegal, irrespective of insider/outsider status.
Overhearing could fall under the "misappropriation theory" of insider trading. If you run into "confidential" (material non-public) information about the security, you still would be committing fraud. [1]
But then the passenger could claim that they did not know the person next to them was Elon Musk, and that when Elon said over the phone "whoever shorts Tesla stock now will become a billionaire next month" they thought it was some random guy giving his 2c on the trade.
> Before U.S. v. O’Hagan, 521 U.S. 642 (1997), individuals could only be liable for insider trading under the classical theory of insider trading. In U.S. v. O’Hagan, the U.S. Supreme Court faced a scenario where a partner at a large law firm purchased stock futures in a company conducting a tender offer based on inside information that he gleaned from other partners at the firm working on the deal. Although the partner had no fiduciary duty to the companies in whose stock he traded, the Supreme Court found him liable under Rule 10 b-5 on the grounds that he used confidential information to trade securities. The Court reasoned that such insider trading is fraudulent because it is akin to embezzlement; that is, the owner of the confidential information has exclusive use of such information, and the trader misappropriates that information by trading on it and not disclosing the use of the information to the owner of the information.
But how does this square with rumors? Is trading based on rumors illegal then?
"I heard a rumour that their defect rate is very high for this new product."
Information that isn't meant to be public might still end up circulating due to mistakes, etc. How would you determine whether trading based on it would be legal or not?
This has nothing to do with ignorance of the law, it's about intent.
You can be fully versed in insider trading law, receive some information that you reasonably assume isn't protected, trade on it, and that's not insider trading.
If that weren't the case, every single person who traded a stock after some MNPI was inadvertently broadcast/published would be guilty of insider trading.
> The principle is that it is illegal to trade on the basis of market-sensitive information that is not generally known. This is a much broader scope than under U.S. law. The key differences from U.S. law are that no relationship to either the issuer of the security or the tipster is required; all that is required is that the guilty party traded (or caused trading) whilst having inside information
Agreed. It absolutely could be if it’s technically non-public information. That’s the entire point of the regulations. Just because you don’t work for the company doesn’t make it not insider trading if you act off information the company didn’t disclose.
> it need only be information that is material and non-public.
I think this is wrong as well. Suppose you are an independent technician repairing cars. Over time you notice that, say, BMW car quality used to be good but has gone to shit. That's not public information, but you would be allowed to short BMW stock in the hopes that, once the public catches on, their share price will tank.
In fact, half the point of stock trading is for you to do research, including your own investigation and testing, and then use that as an advantage. In the process you are bringing the price closer to its true value.
The gp's wording is a little confusing, but he's just trying to explain that the transitive logic of non-public information transferring from a "true insider" to an outsider makes the outsider's trading "insider trading" too, and thus illegal. Think of it as the provenance of the information coming from an insider.
E.g. Martha Stewart is an "outsider" and not an insider of drug company ImClone but she was found guilty of insider trading because she did get confidential information from insiders at the Merrill Lynch brokerage that handled stock trades for the ImClone CEO: https://www.sec.gov/news/press/2003-69.htm
Your scenario of a mechanic repairing cars, or somebody counting the number of cars in various Walmart parking lots, or a hacker that discovers a serious website vulnerability that may cause embarrassment and stock price drop ... none of those situations have a corporate insider in that information disclosure loop.
I can't find any evidence that she was ever charged with insider trading.
The judge dismissed a charge of "securities fraud", which claimed that she had defrauded investors in her own company by making false statements to the public.
The jury convicted her of "false statements", obstruction, and conspiracy.
Slightly clarified my comment via a parenthetical. "Non-public" in this context refers to information which would only be available to those with a fiduciary responsibility and/or a confidentiality obligation to the organization.
I was trying to avoid the use of "insider," because people tend to assume that means employees or directors, but that is not the case. Outside organizations who have, as an example, signed an NDA with the organization may learn of material non-public information, and trading on that could constitute insider trading.
> "Non-public" in this context refers to information which would only be available to those with a fiduciary responsibility and/or a confidentiality obligation to the organization.
Right, but information available to those with a confidentiality obligation can still be traded on if acquired legally. That's the crucial point. It's not enough for it to be non-public and material, you must also be in breach of a fiduciary duty (or acting in concert with somebody who is). For example, if a Boeing CEO was at a coffee shop discussing an upcoming acquisition at the table next to you, you'd be able to trade on it, even though it was confidential information not available to the general public and obviously material to Boeing's stock price.
It's not required – to my knowledge – that the specific person disclosing the information be in breach of a fiduciary duty, as one could easily get around that by disclosing to someone who discloses it to someone else, who then trades on it.
The scenario you mentioned is generally understood to be permissible, but it's not exactly clear to me why. Perhaps that the information became "public" (whether intentional or not) when discussing it in a public forum such as a coffee shop?
> If she put it on twitter could she legally trade on the tip?
IANAL, but if she traded a picosecond after tweeting: no. If she has zillions of followers and traded a year later: yes, because "the public" could be aware of the content of the tweet. A judge will have to decide on in-betweens. When doing that they will likely take into account how open Twitter/X is.
> If I saw the tweet and trade is that legal?
Again, IANAL, but I would think so, if she has ‘enough’ followers.
"Public" doesn't mean the company has publicly announced it - just that the information is available to the public. The situation you're describing is very similar to the Boeing situation above. You just happen to be the first person aware of the news, because your job provides you the ability to see a bunch of cars and understand how their quality is trending. Nor is it any different than you buying, say, one of the first Rivians, thinking the QC was horrible, and shorting the stock.
Regardless of when you learned it, the quality of BMW's cars (in this example) became public information when they started selling them to the public.
Now, however, if an internal employee told the technician that BMW had removed all QA checks from their line, and (s)he should expect quality to fall precipitously in the years ahead, that would be different.
Just because Car and Driver hasn't published an expose doesn't mean the information isn't public. Presumably lots of other independent (and non-independent) technicians have noticed the same thing. Your observation may be sampling error or not something that is sufficiently noteworthy to have percolated up to all the car forums out there en masse.
It's not insider trading to judge the quality of a product based on what you experience of the product in the wild and to make an investment decision accordingly. It's just being canny.
Now, if you learned from someone inside that they were going to do a recall but had not announced it yet, on the other hand...
That being said, I am sure that insider trading is widespread (e.g. the above example). The thing is that it is not easy to detect unless you are already on the radar.
Seems to me the technician does have public information: he is not the only technician that has that data; he might just be the only one wise enough to notice the pattern.
This is not insider information you got from the company. You just observed the world. Totally allowed. What would not be allowed is if you got hold of info from BMW that showed way more repairs than previously reported etc (and it was material for the company).
Anyone could have been on that plane provided they bought a ticket. The fact that nobody else was is irrelevant. The information was not knowable only by those with a fiduciary or confidentiality obligation to the company.
If you're traveling as a regular passenger, you still would not have a fiduciary relationship with Boeing and you have no confidentiality obligations regardless of who else is on the plane.
What does "non-public" mean here? If some information gets leaked without authorization by an insider (like when people leak stuff online...), (when) does that become public?
What if you infer it from a person who does have a privileged position?
Here's the scenario: during acquisitions, acquiring companies sometimes use market research firms to reach out to former execs at the target company as part of their diligence.
Can you trade long if you just receive a bunch of requests from market research firms but never actually talk to the acquiring company?
If you infer it, rather than being told by the research company, then it's probably on the "not" end of the "insider trading" spectrum. The SEC could still charge you, but it would be hard to prove how you inferred the information.
There's an ongoing insider trading case where an executive at Company A learned that Company A might be acquired. He then bought call options on its closest competitor, assuming that the news of Company A being acquired would cause the value of the competitor to also increase.
Depends on how well connected you are to the establishment whether a prosecutor would try to bring charges on more novel fact patterns. Rule by law vs rule of law and all that.
> If an insider with confidentiality obligations shares material non-public information with a person who has no confidentiality obligation, and that person trades on that information, that would be insider trading.
Is this transitive? If the person with no confidentiality tells a 3rd person and that 3rd person trades, is that still insider trading?
You're muddying the waters here; the original poster is correct, but with a few scenarios for outsiders. For example, a company that printed the financial statements of other companies had no NDAs, traded on the data, and was convicted of insider trading because it knew the data was company confidential information.
Theft from the company is the central tenet, whether you are an insider, have a fiduciary responsibility, or an outsider who comes across data from inside the company.
Material nonpublic information that isn't taken from the company is fair game - thus all the quant funds that collect detailed market intelligence and trade on it (or the posted example, a passenger on the plane who knew the news ahead of the public). It doesn't matter one whit whether the information was material or public; it matters only that it wasn't taken from Boeing.
EDIT: I was involved in the early days of a company that sold data to quant funds, and spent many hours with lawyers on exactly this question
> or an outsider who comes across data from inside the company
Doesn't it matter how you came across that data? If you were at a coffee shop and happened to overhear a bunch of Boeing engineers talking about how they were replacing bolts with hot melt glue I thought you could you trade on that. If they explicitly told you that they were replacing the bolts with hot melt glue, then you wouldn't be able to.
Unless you had signed an NDA that stated they would not disclose non-public info and they then told you about the glue, in which case you could still trade on it.
It does seem quite odd to say "it doesn't matter one whit whether the information was material or public" when insider trading is defined as: the trading of a company’s securities by individuals with access to confidential or material non-public information about the company.
Further, I struggle to understand how one could learn information which is non-public without "theft" of that information. It would seem that, by definition, if the organization begins sharing that information with individuals who have no confidentiality obligation, they have now made that information public.
What does tend to happen often is that others assume "public" means "written in the news" and that is certainly not the case. There are plenty of things that are knowable by the public but not obvious, and it's perfectly fine to trade on that.
> Illegal insider trading refers generally to buying or selling a security, in breach of a fiduciary duty or other relationship of trust and confidence, on the basis of material, nonpublic information about the security.
My statement was copied from Cornell Law's definition [1].
But, yes, all of these shorthand definitions are designed for the general public's consumption, and skip over specific nuances - including the SEC's definition. The sentence as written would seem to permit a person with a fiduciary duty to share information with someone who does not have one, and for that person to trade based on the information. However, we know this is not permitted.
In any case, I think my comment still stands. I specifically called out in my parenthetical in the original comment that the information would need to be knowable only by those with a fiduciary or confidentiality obligation to the company. This seems to cover your comment and sibling's concern.
Example: logs of search queries that suddenly trend with adverse information about companies. Those logs are not public, in fact you need to buy them, but they have real signal (thus material and nonpublic), and are perfectly legal to buy and use. Satellite photos to estimate material stacking up outside a factory, or how many cars are in the parking lots of retail stores. Mobile data that has been statistically tied to foot traffic in stores. Credit card purchase data (not public! very material! perfectly ok!) I could go on forever
Go ask a lawyer; this is a big space.
EDIT: Yes, exactly: IT HAS TO BE CONFIDENTIAL TO THE COMPANY AND THUS TAKEN FROM THE COMPANY, LIKE I SAID ABOVE. Your explanation implicated all the cases I described. You haven't seen how rich the sources I mentioned above are; they are very, very definitely information about the companies that are traded.
My explanation did not implicate the cases you described above, because it explicitly said "only available to those with a fiduciary responsibility or confidentiality obligation to the company"
Regardless of the level of fidelity, if you got that information from an unaffiliated third party entity who captured it in the delivery of their own services, it is not "only available to those with a fiduciary responsibility or confidentiality obligation to the company"
It sounds like we are saying the same thing and you don't feel my original comment was clear enough. That's fine feedback. But there's no substantive disagreement. The points you listed above are all fine to trade on.
We did this at a previous startup and it had nothing to do with smoking guns. We were in healthcare and, although we had a strict policy against posting any PHI in Slack, we still purged history on the off chance that it did end up somewhere. It's just another layer of privacy protection.
It also had the nice benefit of forcing people to store actual knowledge in Notion where it could be organized and more easily discoverable.
Anyway, all this to say: there are many reasons to purge Slack history that aren't nefarious (even if some consider the practice ill-advised).
Unless this user has never used a device running something other than iOS, they've already dealt with this.
But this concern requires a few things to be true:
- An alternative app store is created that does not employ any form of restriction to protect users from this
- Legitimate apps that an end user needs see value in publishing themselves on this alternative app store
- There is a critical mass of users that prefer the alternative app store, such that the legitimate app publisher no longer sees value in publishing to Apple's app store
- As a result, those users who would have preferred the privacy and safety that Apple provides are now forced to use the new app store
This is a possible doomsday scenario, but it's not clear to me that enforcing and protecting a market in which Apple is effectively guaranteed a profit on everyone else's apps is the right solution? If this were to happen, perhaps we address those apps through direct legislation that targets user privacy, akin to what the EU has started to move on? Or a solution similar to this.
I think you may be missing part of what the GP is saying: the concern isn't (primarily) that an alternative App Store will overall become more popular than Apple's. It's that specific apps like Facebook will create their own App Stores whose primary purpose is to distribute their one or small number of apps without the restrictions Apple places on privacy.
So the Facebook app (and Instagram, WhatsApp, and whatever else Facebook owns these days) would be able to collect as much data as the OS itself allows, without any kind of warnings before install.
It could potentially even use private APIs of some sort to bypass Apple's OS-level permissions dialogs and collect data without even asking the user first—it's unclear, at this point, to what extent Apple would be able to police this sort of behavior from motivated bad actors like Facebook when they're not being distributed through Apple's App Store.
iOS is sandboxed, so apps can't do anything outside the context of that app. To use any API that would require a permission dialog, you have to request a handle on it, which is only granted after the user approves; you can't just reach into these. This is baked into the OS. Apps like the ones you're talking about already hoover up as much data as possible within this context.
iOS itself is actually really good at this and can be improved and hardened further, but Apple wants you to believe it's somehow their review process and strict distribution channel that makes this possible.
Sandboxing doesn't fully address this problem: permissions can be granted by the user for legitimate purposes, and once granted, they can be abused to violate user privacy.
Yeah I understood that part. My argument is that Facebook would likely lose a lot of users if they forced all users to download Facebook via their App Store. There is still an incentive for them to use Apple’s.
But beside that point, Facebook has been able to use sideloading on Android for years and still distributes via the Play Store, so that's at least some evidence that they believe that distribution channel to be worthwhile.
> Unless this user has never used a device running something other than iOS, they've already dealt with this
Right, and they hate it. There's a reason why the iPhone and App Store were such a massive hit. People will choose convenience and security over freedom most of the time.
I hesitate to respond because the idea that Apple is a 3T marketing firm is moronic
Let’s not pretend Apple isn’t building the best consumer silicon in the world and the most secure and user-friendly consumer operating systems in the world. That’s not even considering their supply chain innovations.
Apple devices are superior to their competition in just about every objective measure.
Apple gained popularity long, long before they started developing their own silicon. Sure, they're doing nice things with their vast fortunes, but let's not pretend that's why they're perceived as a luxury product. It's all in the marketing.
You might want to let Jony Ive know his design org never made a difference lol
Apple was in shambles before the iPod/iPhone. You think the iPhone succeeded because of marketing? You don’t think the fact that they made multi-touch displays easy to use was relevant to their success?
Apple products are so desired that people get resentful when they can’t afford them. Then they post moronic takes on HN from their Samsungs
Let’s be real, most people don’t give a shit (unless they’re into emulators apparently); this is really about companies trying to slice up the pie between them.
One complication of this is that sales taxes can apply down to the city level, meaning the amount the vendor collects after tax can be somewhat variable. This would make a lot of sense if we had a federal sales tax that was just distributed to states based on the purchase location.
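As a rough illustration of how stacked jurisdictions complicate the vendor's side, here is a minimal sketch of a combined-rate lookup. All rates and jurisdictions below are made up for illustration, not real tax data:

```python
# Hypothetical sales-tax lookup: state and city rates stack on the same
# purchase, so the total collected varies with the buyer's location.
# All rates below are illustrative placeholders, not real tax data.
RATES = {
    "state": {"CA": 0.0725, "OR": 0.0},
    "city": {("CA", "Los Angeles"): 0.0225, ("CA", "Sacramento"): 0.0125},
}

def combined_rate(state: str, city: str) -> float:
    """Sum the state rate and any city surtax for the purchase location."""
    return RATES["state"].get(state, 0.0) + RATES["city"].get((state, city), 0.0)

def total_price(subtotal: float, state: str, city: str) -> float:
    """Price after tax, rounded to cents."""
    return round(subtotal * (1 + combined_rate(state, city)), 2)
```

Under a single federal rate, this lookup collapses to one number and the revenue split becomes a back-end distribution problem instead of a point-of-sale one.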