> There is one problem, though, that I find easily explainable. Place a token at the bottom left corner of a grid that extends infinitely up and right, call that point (0, 0). You're given a list of valid displacement moves for the token, like (+1, +0), (-20, +13), (-5, -6), etc., and a target point like (700, 1). You may make any sequence of moves in any order, as long as no move ever puts the token off the grid. Does any sequence of moves bring you to the target?
If someone gives you such a sequence, it seems trivial to verify it in linear time. Even for arbitrary dimensions, such a witness can be verified in linear time.
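For concreteness, here's a minimal sketch of that linear-time check (the function name and the example instance are mine, for illustration):

```python
# Walk the witness (a list of move indices), applying each displacement and
# rejecting if the token ever leaves the non-negative grid.
def verify(start, moves, target, witness):
    pos = list(start)
    for i in witness:
        pos = [c + d for c, d in zip(pos, moves[i])]
        if any(c < 0 for c in pos):   # this move pushed the token off the grid
            return False
    return pos == list(target)

moves = [(1, 0), (-20, 13), (-5, -6)]
print(verify((0, 0), moves, (2, 0), [0, 0]))   # True: two steps right
```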
To be in NP, the witness must be verifiable in polynomial time with respect to the size of the original input. In this problem (VAS Reachability), the solution can be `2^2^2^...^K` steps long. Even if that's linear with respect to the witness, it's not polynomial with respect to the set of moves + goal.
Hmm.. I'd love to see a more formal statement of this, because it feels unintuitive.
Notably the question "given a number as input, output as many 1's as that number" is exponential in the input size. Is this problem therefore also strictly NP-hard?
> Hmm.. I'd love to see a more formal statement of this, because it feels unintuitive.
The problem is called "Vector Addition System Reachability", and the proof that it's Ackermann-complete is here: https://arxiv.org/pdf/2104.13866 (It's actually for "Vector Addition Systems with states", but the two are equivalent formalisms. They're also equivalent to Petri nets, which is what got me interested in this problem in the first place!)
> Notably the question "given a number as input, output as many 1's as that number" is exponential in the input size. Is this problem therefore also strictly NP-hard?
(Speaking off the cuff, so there's probably a mistake or two here, computability is hard and subtle!)
Compression features in a lot of NP-hard problems! For example, it's possible to represent some graphs as "boolean circuits", e.g. a function `f(a, b)` that's true iff nodes `a, b` have an edge between them. This can use exponentially less space, which means a normal NP-complete problem on the graph becomes NEXPTIME-complete if the graph is encoded as a circuit.
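A toy illustration of that kind of succinct encoding (the hypercube example is mine, not from the comment):

```python
# Adjacency for a graph whose nodes are all 64-bit labels: an edge iff the
# labels differ in exactly one bit (a hypercube). The "circuit" is a few
# lines of code, while an explicit edge list would be astronomically large.
def f(a: int, b: int) -> bool:
    diff = a ^ b
    return diff != 0 and (diff & (diff - 1)) == 0   # exactly one bit set

print(f(0b1011, 0b1111))  # True: the labels differ in a single bit
print(f(0b1011, 0b1100))  # False: they differ in three bits
```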
IMO this is cheating, which is why I don't like it as an example.
"Given K as input, output K 1's" is not a decision problem because it's not a boolean "yes/no". "Given K as input, does it output K ones" is a decision problem. But IIRC then for `K=3` your input is `(3, 111)` so it's still linear on the input size. I think.
My point here more than anything else is that I find this formulation unsatisfying, because it is "easy" to verify that we have a witness; it's exponential only because the witness itself is exponential in size.
What I'd like as a minimum example for "give me something that is NP-hard but not NP-complete" is a problem whose input size is O(N), whose witnesses are O(N), but which requires O(e^N) time to validate that the witness is correct. I don't actually know that this is possible.
I disagree with the formulation of "decision problem" here. The problem, properly phrased as a decision problem, is "For a given K, does there exist a string with K 1s".
While it is straightforward to answer in the affirmative, and to produce an algorithm that produces that output (though it takes exponential time), validating that a solution is in fact a valid solution also takes exponential time, if we are treating the size of the input as the base for the validation of the witness.
In the definition of NP, you get the input in unary, so that gets rid of that exponential. The issue with VAS is that the number of moves required could be extremely large.
There's a similar situation in chess (at least if generalized to N by N). You could assert Black has a forced win from a given position, but verifying the claim requires checking the entire game tree, rather than a succinct certificate. Generalized chess is in fact PSPACE-complete, which is generally believed to be outside of NP.
It needs to be a decision problem (or easily recast as a decision problem).
"given a number as input, output as many 1's as that number" doesn't have a yes or no answer. You could try to ask a related question like "given a number as input and a list of 1s, are there as many 1s as the number?" but that has a very large input.
Straightforward to pose as a decision problem -- you're given a sequence of {0,1} representing a base-2 input (N), output a string of {0, 1} such that there are N 1's. You could make it "N 1s followed by N 0s" to avoid being too trivial, but even in the original formulation there are plenty of "wrong" answers for each value of N, and determining whether a witness is correct requires O(2^b) time, where b is the number of bits in N.
That's not a decision problem either. You need to cast it in the form of determining membership in some set of strings (the "language"). In this case the natural choice of language is the set of all strings of the form "{encoding of n}{1111....1 n times}", which is decidable in O(log(n)+n) steps, i.e. linear time.
Linear time in the length of the sequence, yes. But is the sequence length linear in dimension size, or number of moves given? That's what is interesting.
This applies to any computable problem though, no? At minimum the verifier has to read the witness. If we ignore PCPs and such.
The point here is that the witness grows very fast in terms of vector dimensionality and/or move set size.
Proving it's nonpolynomial is pretty easy: just make the goal vector `(10, 10, 10)` and the displacement vectors `{(1, 0, 0), (-10, 1, 0), (-10, -10, 1)}`. It takes ~1000 steps to reach the goal, and if we add one more dimension we need 10x more steps.
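A rough tally of that construction, generalized to d dimensions (this counts one natural "carry" strategy and ignores the final refill of the earlier coordinates, so it's an estimate, not a proven minimum):

```python
# Move i adds 1 to coordinate i and subtracts 10 from every earlier
# coordinate, so incrementing coordinate i costs one move plus refilling
# 10 units in each earlier coordinate.
def steps_to_increment(i):
    return 1 + 10 * sum(steps_to_increment(j) for j in range(i))

for d in range(1, 6):
    print(d, 10 * steps_to_increment(d - 1))
# 1 -> 10, 2 -> 110, 3 -> 1210, 4 -> 13310: roughly 11x per extra dimension
```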
So it grows, at least, exponentially. Back in the mid-1970s, Lipton found instances whose shortest solutions are at least double-exponential in length, proving the problem EXPSPACE-hard. It was conjectured to be EXPSPACE-complete for the longest time, but turned out to be significantly harder.
Sounds implausible… Number of computational steps to find the shortest sequence maybe grows with Ackermann function. But length of the shortest sequence?
The crazy thing about the definition of NP-completeness is that Cook's theorem says that all problems in NP can be reduced in polynomial time to an NP-complete problem. So if a witness to a problem can be verified in polynomial time, it is by definition in NP and can be reduced to an NP-complete problem.
If I can verify a solution to this problem by finding a path in polynomial time, it is by definition in NP. The goal here was to present an example of a problem known to not be in NP.
> The crazy thing about the definition of NP-completeness is that Cook's theorem says that all problems in NP can be reduced in polynomial time to an NP-complete problem.
What were you trying to say here? Cook's theorem says that SAT is NP-complete. "All problems in NP can be reduced in polynomial time to an NP-complete problem" is just a part of the definition of NP-completeness.
Wait, are you implying that regulations are hard to comply with, poorly documented, and enforcement is extremely selective to the point where they no longer achieve their intended function?
Big news if true. They should do something about that.
it's not that it's hard to comply, it's fighting malicious compliance which is hard. nevertheless, it's a damn good question why every single operator that has "accept all" and doesn't have "reject all" right there on the consent banner isn't fined on the spot.
I think the commission noted this behavior, and malicious compliance is already factored into the DMA. The "deregulation" of GDPR could as well be retrofitting all the lessons learned into the GDPR v2.
Worth noting first that this is not really the GDPR (nobody here has said that it is directly, but in other threads people are making that assumption), this is the ePrivacy Directive (which is probably what the EU should be revising in light of these universally hated popups).
The EU hands out arbitrary fines to large companies that range in the hundreds of millions of dollars, and asks companies to comply with these "technology-neutral" guidelines [1] which are so opaque that it is impossible to decipher when you are and are not in compliance with them.
> The methods for giving information, offering a right to refuse or requesting consent should be made as user-friendly as possible
This is wonderfully clear and explains exactly when you will and won't be the victim of extortion-level fines from the EU.
You call it malicious compliance; sure, but when this is what everyone else is doing, and you decide that you want to go against "industry norms" for your website, you are painting a giant target on your back.
I honestly don't see how your comment makes sense.
It's the GDPR (published in 2016) that mandates that consent must be freely given. Using a 2002 directive to justify your point is disingenuous. You could have selected instead the 2020 guidelines [1] that are extremely detailed and address this point explicitly:
> Example 17: A data controller may also obtain explicit consent from a visitor to its website by offering an explicit consent screen that contains Yes and No check boxes, provided that the text clearly indicates the consent, for instance “I, hereby, consent to the processing of my data” […]
> You call it malicious compliance; sure, but when this is what everyone else is doing, and you decide that you want to go against "industry norms" for your website, you are painting a giant target on your back.
Non sequitur. Surely refusing to engage in malicious compliance paints _less_ of a target on your back, especially when that "malicious compliance" is actually non-compliant.
I mean, have you ever had to deal with regulators? Departing from industry norms will 100% be used against you in any regulatory proceeding, no matter how minor. It is naive to think otherwise. Regulators go after big pockets.
The guidelines you link to are advisory, not legal, and they trace back to the ePrivacy regulations (although the notion of "consent" was modified by the GDPR; it's not clear which interpretation applies -- ePrivacy regulations, which are still in effect, also require consent). "The obligation is on controllers to innovate to find new solutions that operate within the parameters of the law and better support the protection of personal data and the interests of data subjects." This is standard boilerplate shit that says "you have to follow the regulations, not whatever is in this doc".
I honestly don't know what to tell you. The cookie popups are an offense in every possible way; they fail to accomplish their intended purposes, they burden users with useless interactions that provide no protection, and they burden website developers with useless busywork to document compliance to hopefully avoid retaliatory punitive fines if you draw the attention of regulators or EU officials. That these policies find supporters on HN of all places is beyond my comprehension.
> There are already far, far too many examples of physicians not trusting patients about pain.
I am friends with a couple of ER doctors, who are probably the worst offenders (self-acknowledged) in this space. It's based on a real phenomenon, though, of drug-seeking behavior.
As people with chronic pain communicate with each other (through things like Reddit) on the best way to communicate to doctors that their pain is legitimate, those techniques are also inadvertently taught to other people who are seeking pain medication for recreational purposes.
I think the cause of widespread drug legalization has been weakened by a couple of real world efforts in that direction, but I still stubbornly cling to the belief that if people are allowed to make their own choices, then you can partition the recreational users from the chronic pain sufferers and maybe let medical science have a slightly better chance of addressing the latter case. That said, given factors like cost and insurance coverage, it may just be a realigning of incentives rather than fixing the problem itself.
> but I still stubbornly cling to the belief that if people are allowed to make their own choices, then you can partition the recreational users from the chronic pain sufferers
I can empathize with this thought (having had an episode of pain disbelief in a hospital myself) but the idea of partitioning recreational users from chronic pain sufferers isn’t reflective of the reality.
They aren’t two mutually exclusive groups. In fact, many recreational users get their start from over-prescribed opioids. Some people experiencing pain and all of the associated emotional difficulties will see the sudden access to opioids as an opportunity or even an excuse to indulge in opioid excess.
Self-medication with opioids also produces a very quick on-ramp to dependence in average users. If you’re anything like me, you prefer to use the minimum dose of any medication and get off as quickly as possible. I’d rather have mild lingering headache pain than take an extra Ibuprofen.
Not so with much of the general public. I have friends in medicine who believe even Tylenol should be prescription only because of how frequently they see people destroying their livers by taking excessive amounts. Look at simple drugs like Afrin nasal spray and people who become severely dependent for months or years because they can’t even read the directions on the bottle. Open this same door to something euphorically reinforcing like opioids and the number of people walking themselves straight into addictions because they wanted something stronger for the occasional headache would be massive.
>> you can partition the recreational users from the chronic pain sufferers
Except that you can't. There is no bright line between those two groups. Many recreational users/abusers started their journey when prescribed drugs for legitimate pain. Steady use becomes dependency, then you look for other sources, and quickly you are crawling the dark web for a dealer in your neighborhood.
This does not clarify -- your initial post made a claim about 2^0, which (correctly) does not appear in this list.
Moreover it is trivial that there are no negative powers of 2 that have all even digits, since the trailing digit will always be 5. So the question reduces to whether there are powers of 2 greater than 2048 that have all even digits.
2^k mod 10 is never odd; it's the cycle (2, 4, 8, 6).
Related here is the length of the cycles mod 10^k, https://oeis.org/A005054. Interestingly, the number of all-even-digit elements in those cycles does not appear to be in the OEIS; I get 4, 10, 25, 60, 150 as the first five terms.
This does appear to get more efficient as k gets higher; for k=11 I get a cycle length of 39,062,500 with an even subset of 36,105, meaning only .09% of the cycle is all-even.
This is all brute force; there's probably a more elegant way of computing this.
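Roughly the brute force I mean (a quick sketch; the helper name is mine): walk the cycle of 2^n mod 10^k and count the members whose k low digits are all even.

```python
def even_count(k):
    m = 10 ** k
    seen = {}                       # value -> index of first occurrence
    x, n = 1, 0
    while x not in seen:
        seen[x] = n
        x = x * 2 % m
        n += 1
    cycle = list(seen)[seen[x]:]    # drop the pre-period before the cycle
    allowed = set("02468")
    hits = sum(1 for v in cycle if set(str(v).zfill(k)) <= allowed)
    return len(cycle), hits

for k in range(1, 6):
    print(k, even_count(k))         # hits: 4, 10, 25, 60, 150
```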
I see my error here -- you can in fact eliminate half, even for 2^k mod 10, because both 6 and 8 force a carry; so ending in a 2 or a 6 means that the next higher digit must be odd.
The length of the cycles mod 10^k is simply Euler's phi function of 5^k: 5^(k-1) * 4 (or a factor of phi(5^k); AFAIK it is always exactly phi(5^k), although I don't have a proof of this handy).
The length of the even subset grows roughly as 2.5^k * 1.6. To see why, consider that the length of the cycle grows by a factor of 5 when incrementing k. Each all-even-digit power mod 10^k leads to 5 numbers mod 10^{k+1} which all share the same last k digits - i.e. their last k digits are even. We can model the k+1'th digit as being random, in which case we expect half of all those new numbers to consist entirely of even digits (one new digit, which is either odd or even, and k digits from the previous round that are all even). Thus, when incrementing k, the number of all-even-digit powers in the cycle will grow by approximately a factor of 2.5.
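A tiny numeric check of that model against the brute-force counts above:

```python
# Predicted all-even counts under the "each new digit is even with
# probability 1/2" model, next to the brute-force values.
for k, actual in zip(range(1, 6), [4, 10, 25, 60, 150]):
    print(k, 1.6 * 2.5 ** k, actual)   # 4.0, 10.0, 25.0, 62.5, 156.25
```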
You only need to check enough digits to find an odd one. An odd digit appears in the low order (< 46 up to a high level) digits for the first quadrillion cases so you only need to compute 2^n mod 10^d where d is big enough to be safe. I used d=60 in my computations to take this to 10^15 candidates (with no additional terms found).
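A sketch of that search (the bound here is tiny and illustrative; the actual run went to 10^15 candidates):

```python
# Keep only the low D digits of 2^n; a single odd digit among them already
# rules the candidate out, so almost every n is rejected immediately.
D = 60
M = 10 ** D
x = 1
for n in range(1, 1_000_000):
    x = x * 2 % M
    if n > 11 and all(c in "02468" for c in str(x)):
        print("survivor needing a full check:", n)   # never fires in practice
```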
This is remarkable! I always find it fascinating when simple-to-express properties lack a proof. This is a very simple thing to evaluate, and it seems like it should be straightforward to establish that 2048 is the highest such power.
Base‑10 is just our chosen way of writing numbers; it doesn’t need to have any deep relationship with the arithmetic properties of sequences like the powers of 2. For most series (Fibonacci numbers, factorials, etc.), the digits of large members are essentially random and don't obey any pattern - the two things are just unconnected. It seems extremely likely that 2048 is the highest, but there might not be a good reason that could lead to a proof - it's just that larger and larger random numbers have less and less chance of satisfying the condition (with a tiny probability that they do, meaning we can't prove it).
Interestingly, there are results in the other kind of direction. Fields medalist James Maynard had an amazing result that there are infinitely many primes that have no 7s (or any other digit) in their decimal expansion. This actually _exploits_ the fact that there is no strong interaction between digits and primes - to show that they must exist with some density. That kind of approach can't work for finiteness though.
Yes, I find math problems that depend on base 10 to be unsatisfying because they rely on arbitrary cultural factors of how we represent numbers. "Real" mathematics should be universal, rather than just solving a puzzle.
Of course, such a problem could yield deep insight into number theory blah blah blah, but it's unlikely.
Everything about this seems so arbitrary. You look at the powers of an arbitrary number (here, 2), you pick an arbitrary base (here, 10) in which to express those powers, and ask for a random property of its digits (whether they belong to the set {0,2,4,6,8}).
Nothing about this question feels natural. I've noticed that random facts often don't have simple proofs.
In this case, it doesn't even help to downsize the problem. Erdős once asked the same question, but with powers of 2, base 3, and the set {0,1}. (If you want to, you can disguise that version as something more natural-looking like "Which powers of 2 can be expressed as the sum of distinct powers of 3?") But we're still nowhere close to solving it.
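Here's a quick sketch of that version of the question: scan powers of 2 for ternary expansions that avoid the digit 2 (equivalently, sums of distinct powers of 3).

```python
def ternary(n):
    digits = ""
    while n:
        digits = str(n % 3) + digits
        n //= 3
    return digits

for k in range(1, 10_000):
    if "2" not in ternary(2 ** k):
        print(k, 2 ** k)   # prints only k=2 (4) and k=8 (256), then nothing
```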
You can generalize it if you want. Given powers of p in base b, what is the largest n=p^i such that each digit is divisible by k? Here we have: if p=2, b=10, k=2, then n=2048 and i=11. Why? Maybe there is a deeper reason that applies to all values of p, b, k.
I mean, clearly it isn't in this case. But given that the digits of 2^n are cyclical at each decimal position, it does feel like this should fall out of some sort of Chinese remainder theorem manipulation.
True. It might also just be that the question hasn't attracted the attention of number theorists, and finding a proof wouldn't be unreasonably difficult to an expert in the field.
Nope, it's not that easy in this case. E.g., Erdős conjectured in 1979 that every power of 2 greater than 256 has a digit '2' in its ternary expansion [0]. This makes sense heuristically, but no methods since then have come close to proving it.
Digits of numbers are a wild beast, and they're tough to pin down for a specific sequence. At best, we get statistical results like "almost all sequences of this form have this property", without actually being able to prove it for any one of them. (Except sometimes for artificially-constructed examples and counterexamples, or special classes like Pisot numbers.)
There are a couple of cooperative multiplayer minesweeper games around -- of note is [1], which is an infinite (or very large) persistent minesweeper world where you can just start solving subsets of the board, and move up leaderboards, etc.
In a sense; "linear regression" refers to a specific technique for producing a linear model, one whose solution can be computed exactly.
Most artificial neurons are trained stochastically rather than holistically, i.e. rather than looking at the entire training set and computing the gradient to minimize the squared loss or something similar, they look at each training example and compute the local gradient and make small changes in that direction.
In addition, the "activation function" almost universally used now is the rectified linear unit, which is linear for positive input and zero for negative input. This is non-decreasing at least as a function, but the fact that it is not monotonic means that there is no additional loss accrued for overcorrecting in the negative direction.
Given this, using the term "linear regression" to describe the model of an artificial neuron is not really a useful heuristic.
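To make the contrast concrete, here's a minimal sketch (synthetic data, no activation on the output, squared loss) of exact least squares next to one-example-at-a-time gradient steps:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Linear regression proper: a closed-form, exact solution.
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Neuron-style" training: visit each example and take a small step along
# its local gradient of the squared error.
w = np.zeros(3)
for _ in range(50):
    for xi, yi in zip(X, y):
        w -= 0.01 * 2 * (xi @ w - yi) * xi

print(w_exact, w)   # the two roughly agree on this easy linear problem
```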
The devil is in the details here. A land value tax in the Georgeist sense relies on being able to determine the unimproved value of the land.
This is impossible.
So it always fails.
I love markets, everyone loves markets. But markets only approach efficiency on a unit basis when there is a liquid market for fungible goods. Otherwise you can get aggregate pricing signals, but really no information about a specific item.
In NZ we pay property rates based on a combination of land and improvement value. It's been this way for a very long time. Removing the improvement portion and focusing only on the land value portion can only be a simplification - not a complication.
Official estimates of land value are common in many jurisdictions and nothing new.
"Wherever Land Value Rating applies it has been adopted by poll of ratepayers, representing a lot of work and profound social concern. Wherever Capital or Annual Value Rating applies it has been imposed by Government or Councils, contrary to the express wishes of the ratepayers in almost every case. [15]
With certain exceptions [16] local LVT, assessed through the LV system, was preferred where democratic choice was allowed, but this choice was removed in 1988 by the Labour government, which revoked the democratic polls that had kept the local LVT in place for more than 130 years."
Yes -- taxing on the estimated full value of the property is a very common thing. Almost universally the case in the US as well. And you can use previous sale values and assessments to try to get what seems to be a market valuation.
Two problems though. One is that this is very anti-Georgeist; the main idea with the school of thought is that you should not be penalized for improving the land; you do not want counter-incentives to land improvement, because that's a net negative for neighbors and for society.
The other is that this process is very very very bureaucratic and corruptible. I can see for myself how this manifests in places I've lived because the estimated value for tax purposes is so wildly different than the actual sale price of real estate. I can't speak to NZ specifically as to how much of a problem this is and how it is addressed, but I'm going to go out on a limb and offer the hypothesis that it is poorly addressed and there is a big divergence between the estimates and the sale prices for real estate. Prove me wrong!
> being able to determine the unimproved value of the land
X = Unimproved land value
A = How much it would cost to build the building
B = Improved land value
A+X = B, solve for X, you get X = B-A.
We can measure A based on the building's floorplan and labor costs for similar size houses.
We can measure B using the same methods we currently use for figuring out land value for tax purposes (or if you don't like those, whatever improved method you might have in mind, e.g. last fair market sale price plus inflation rate since last sale date).
You might argue that "the cost of a thing doesn't represent its true value" but I'd classify that as a separate debate. Tax codes are generally written as if costs actually do represent value. E.g. a company's value according to its accounting books is often quite different than its market cap, but we generally tax companies based on their accounting books. (Even market cap has its downsides as a measure of value. My personal conclusion is that the notion of "value" itself gets awfully fuzzy, heuristic and imprecise outside of tautological corner cases like "1g of gold is worth 1g of gold".)
How much it would cost to build the building and the value of the land after improvement are not the same; for instance, there are plenty of properties in St. Louis, Chicago, Gary, etc. where the value of the property is less than the cost of building the structure on it. Does that mean the unimproved land value is negative?
Yeah, you'd need to include some kind of correction term for an A > B situation.
Building value can be negative, if the costs of renovation or demolition would be greater than the value of the land.
But building value could also be positive, but less than the cost of construction. Basically "It works fine and it's worth something, but if you built it today, the value you could sell it for wouldn't cover the costs."
Basically you need to design a formula for a requirement something like: "We need to proportion the improved land value between the building and the unimproved land. The building 'should get' 10% - 90% of the value. We start with cost, if that's near the middle of the range we accept it, but toward the ends of the range we modulate it (either with hard cutoffs or softer asymptotic blending)."
Then you need a separate case for where the building is negative value. In that case you could probably get by with an inspection and cost estimates for renovation / demolition.
You could also discount a building based on its age, say 1% per year up to 70%. This represents the fact that Joe's house would cost $200k to build today, but if you did that you'd get a house that's 0 years old. Joe's actual house is 60 years old, so to guesstimate the value of that 60-year-old house we take $200k and subtract 60% to get $80k. (The 30% minimum represents the fact that even a centuries-old house in good repair has some value.) Again, after the discounting you'd apply modulation to make sure it's 10% - 90% of the total value.
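A sketch of that heuristic (all the thresholds are the numbers from this comment, not established practice):

```python
def split_value(improved_value, build_cost, age_years):
    # Age discount: 1% per year, floored at 30% of replacement cost.
    building = build_cost * max(0.3, 1 - 0.01 * age_years)
    # Modulate so the building gets 10%-90% of the total value
    # (hard cutoffs here; softer asymptotic blending would also work).
    building = min(max(building, 0.1 * improved_value), 0.9 * improved_value)
    return building, improved_value - building

# Joe's house: $200k to rebuild, 60 years old, $300k improved value.
print(split_value(300_000, 200_000, 60))   # (~$80k building, ~$220k land)
```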
More likely the cost of building the structure needs to be adjusted for depreciation and/or cost of bringing it up to code. If a building costs more to fix up than starting from scratch, or effectively just needs to be demolished, clearly it's the building's value that's negative.
Although, there's no reason unimproved land value can't be negative, the land just needs to be burdensome.
The issue is that we're sorta back on the same issue as before, where we need to estimate the value of the unimproved land, except now it's (in my opinion) even more complicated. In this case, the structure doesn't need adjustment due to anything physically wrong; it's just that no one is willing to pay the theoretical value for it in that location. Think of an expensive anime wrap on a car: the wrap cost 2k or whatever, but I don't care, I really only want the car.
Good point on the value of unimproved land being negative, I was thinking of a different edge case. In the situation of "burdensome" land, maybe it does make sense for there to be negative value land with an associated tax credit if the owner is compelled to do something to remediate?
It is entirely possible and is not infrequently done. There are certain cases where it's fairly common. One is when there is a need to establish the value that a piece of land had at a certain time in the past before an improvement was made (e.g., to determine what tax liability existed at that time, or how assets were distributed among partners or heirs). Another is when someone needs an appraisal that separates the land and the improvements in order to convince someone to give them a loan to do something with the land that will involve getting rid of whatever's already built on it. This can happen for instance when there is an "underutilized" piece of land (e.g., a one-story building in a dense urban area).
It's true that determining the value of the unimproved land is not an exact science, but that is true of real estate appraisal in general.
I understand that it is more complicated and has wider error bars than property values, but is it really so difficult or need to be so precise that its effectively impossible?
Impossible is not the right word for your question. It’s not impossible, but is it worth it? Is it feasible? Does it introduce too much room for inefficiency or fraud?
Isn't the value of the land of a property [1] basically determined by the full value (land + improvement) of nearby [2] properties? Isn't that why people desire buying a certain piece of land, because of what it is connected to - close to nature, close to market, far from noise, etc.
You do have a lot of price signals about the value of nearby private property, and you can also estimate the price of public land (parks and major roads) by using data from across the whole country to determine how much people want to live close to, say, a park. Then come up with some formula to aggregate that into a single value for each land.
I would hope that someone does this at least as a theoretical exercise to see what the estimated values would be, and whether we can find a formula that is reasonable for almost all properties.
[1] I am talking about residential and retail properties. Not farm or mining type land.
[2] Close/far in both the topological and the geometric sense.
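As a toy version of that exercise (the data, the weighting scheme, and the decay constant are all invented for illustration):

```python
import math

# Estimate a parcel's per-square-meter land value from nearby sales,
# weighting closer sales more heavily with a simple distance decay.
def estimate_land_value(parcel_xy, sales, decay_m=500.0):
    """sales: list of ((x, y), land_price_per_m2) from recent transactions."""
    num = den = 0.0
    for (x, y), price in sales:
        w = math.exp(-math.hypot(x - parcel_xy[0], y - parcel_xy[1]) / decay_m)
        num += w * price
        den += w
    return num / den

sales = [((0, 100), 900.0), ((300, 0), 700.0), ((2000, 0), 200.0)]
print(estimate_land_value((0, 0), sales))   # dominated by the two near sales
```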
"relies on being able to determine the unimproved value of the land.
This is impossible."
1) my property tax includes 2 line items, land and improvements. So clearly not impossible.
2) it's also irrelevant. (Land value) tax just has to be plausible and reasonably consistent. Property taxes are contested all the time and are based upon interpolations and extrapolations of recent sales when possible. All that is required is that the taxing authority comes up with something that the owner pays.
It doesn't seem like it should be terribly hard to calculate the improvement value, given that, as the article notes, they were already taxing people on imputed rent.
Achievable rents would be one of the factors among others that one would use to calculate the value of a building.
I think LVT is a pretty terrible way to fund government, but most states already have a property tax and assessment process. I have to pay property tax in California.
Part is a land value tax for my land, and part is a wealth tax on unrealized gains.
The tax would change the value a lot. Much less complicated to set a tax based on what one is allowed to do with the land. Taxing farms would hit mostly poor people.
This critique assumes that because land parcels are not perfectly fungible, it is impossible to determine unimproved land values with sufficient accuracy for taxation. However, this is an overstatement, and several counterpoints undermine the argument:
1. Property Tax Systems Already Differentiate Land and Improvements
Nearly all property tax systems already distinguish between land and structures. While not perfect, assessors routinely estimate land values separately using standard appraisal techniques, such as sales comparisons, income capitalization, and residual valuation.
Many jurisdictions with split-rate taxation (e.g., Pittsburgh, Harrisburg) have successfully implemented higher land taxes without insurmountable assessment issues.
2. Market Transactions Provide Usable Data
While land parcels are not perfectly fungible, land is frequently bought and sold. Vacant land transactions, teardown sales, and comparable properties provide pricing signals that allow for reasonable estimates.
Even when improvements are present, sales can still reveal land values through statistical analysis. Techniques like hedonic regression and Computer-Assisted Mass Appraisal (CAMA) leverage large datasets to isolate land value.
3. Land Value Is Already Implicit in Market Prices
The value of land manifests in rental and purchase prices. Two identical buildings in different locations will sell or rent for different prices, with the difference attributable to land value.
Land residual analysis (subtracting improvement value from total value) is a well-established method used by appraisers and economists.
4. Perfect Accuracy Is Not Required
No tax system relies on perfectly precise assessments. Income taxes, sales taxes, and corporate taxes all involve estimation and compliance issues, yet they function well enough for governments worldwide.
Even if some inaccuracies exist, a Land Value Tax (LVT) still improves economic efficiency by discouraging land speculation and incentivizing productive land use.
5. Assessment Methods Keep Improving
Geographic Information Systems (GIS), machine learning models, and mass appraisal techniques are making land assessment more precise.
Many cities and countries, including Denmark, Estonia, and parts of Pennsylvania, have successfully implemented LVT systems.
6. The Alternative Is Worse
Even if land assessments involve some degree of uncertainty, it’s still preferable to the distortions created by property taxes on buildings, which discourage construction and improvements.
Current property tax systems often undervalue land, leading to inefficiencies and speculative hoarding.
Conclusion:
The claim that it is “impossible” to determine unimproved land value is empirically false. While perfect precision is unattainable, the same is true of all tax assessments. Practical methods exist to estimate land value with sufficient accuracy, and jurisdictions that have implemented LVT-like systems have demonstrated their feasibility.
The real question is not whether it can be done at all, but whether the benefits of LVT outweigh any challenges in assessment—historical evidence suggests they do.
Would, at the very least, need a carve-out for your primary domicile + a lockout period.
It sounds ripe for yucks. Imagine making a megarich person mad online and they buy your house out from under you. So you move to another one. They do it again, and again, putting you in the delightful position of playing chicken with their vindictiveness and your ability to pay your own property taxes.
Moving is colossally inconvenient, and extra horrible when you're forced to do it.
Can't imagine local supermarkets being able to endure the inevitable bidding war on their properties from walmarazon.
Good points. It still might be possible to put in limits that make it workable: Maybe the sale only goes through after 5 years and the money is held in escrow until then. Maybe an individual/organization can only exercise the option once per 20 years.
The most obvious solution to this is to do away with land ownership entirely. If the land is state owned and the value of a lease on it is auctioned to the highest paying renter, the market price of the land is precisely the same as the tax paid on it.
that seems like a super clean solution as long as nothing ever gets built, but the problem is the same: it's not a question of how much the land is worth, it's a question of how much the unimproved land is worth.
everything gets messy as soon as you start improving the land, because the improvements are not easily removed from the land. if the state owns the land but i own the skyscraper i built on it, what happens to my skyscraper when the state auctions my lease off to somebody else? the whole point of a land value tax is that the value of land changes after things have been built on it (and around it), and you need to capture that change in value.
Buildings are constructed with borrowed money. Attach said debt to the land, rather than the person building, and payments on it are a part of the lease. If you build a sky-scraper, you only pay for it while you occupy the land. Then if someone outbids you for it, they have to pay the debt.
How does this differ from an investment then? Debts attached to people can be recovered through their assets, but if the only "asset" is a temporary government lease on some land, there's nothing to recover if it falls through. Furthermore, this leads to so many conflicts of interest it's not even funny; what's to stop someone from taking out a huge loan and paying their own construction company to do the work at a premium?
It can't fall through, because the debt is on the land and not a person. The land doesn't run out of money. There will always be a buyer for the building, who will then have to pay down the loan. If there isn't a buyer for the building, then the building is worth less than the loan, and even in the traditional case of the bank issuing the loan to an individual, there would not be sufficient recovered capital to repay the loan.
TL;DR the lender loses money iff the value of the building is less than the money they loaned to build it (true in any system).
> what's to stop someone from taking out a huge loan and paying their own construction company to do the work at a premium?
That's called embezzling. Are you familiar with corporate law? The "limited liability" in "limited liability company" refers to the fact that the owners of a company are not liable for the debts owed by said company. One can, in principle, take out a large loan on behalf of a company you own, pay it to yourself as salary, then declare the company insolvent and pocket the cash. Laws exist to prevent this and could be extended to cover fraud under the new system.
> even in the traditional case of the bank issuing the loan to an individual, there would not be sufficient recovered capital to repay the loan.
This just is not necessarily true, and more importantly it runs into the issue that lenders will not lend to people and land that they believe cannot be positively valued for the entire term of the debt. In real life, if you buy a home on a 30 year mortgage, the lender does not need to care much whether it could be less valuable in 20 years, only that it retains enough value to be worth the rest of the loan if it needs to be seized.
As for embezzlement, go try to get a loan as a new LLC without assets, or revenue, or personal collateral. The reality is that under your proposed system, no bank would ever take the risk of loaning money to a plot of land. Forget fraud and embezzlement, there's zero mechanism for preventing flat out bad management from ruining a plot over and over again if you don't attach debts to people.
so then it isn't just that the state owns the land, but for all practical purposes the state owns the land and the buildings on it. nobody ever owns any property anymore, and all property is only leased from the state.
i don't know that i'm even necessarily against this idea, but it's a long ways from just being a land value tax at this point.
In practice, the tenant can typically break the lease with minimal consequence as the landlord (the government) can generally find new tenants, and diversifies risk over many landholdings.
That removes any benefit of confiscating all land and leasing it out while still retaining all the society destroying consequences of this idea. Basically at this point it's just a proposal to confiscate all land and resell it, for fun I guess.
This is an important part of the system. People won't construct buildings unless they feasibly believe that they can also provide the highest return on investment for said buildings and therefore pay the highest rent on the land. If you are unable to produce the greatest value with a piece of land, why should you be allowed to hoard it away from someone who can do something better with it? That said, this theory obviously falls short in practice, but equally obviously, there exist many practical remedies to that problem. Lease values can be adjusted based on the value of surrounding leases, then negotiated down by an auction if the owner believes they've grown too high, rather than periodically re-auctioned. Down payments can force bidders to internalise the cost of displacing existing business. Existing lease holders would clearly be given an advantage in the bidding process if they choose to initiate it.
Edit: I should add that the simplest way to handle this is to attach the debt used for building to the land instead of the land-owner. That way there is no loss for auctioning off the land after building.
So I rent somewhere cheap, take out massive loans, pay my brother to build a house (with a large profit), then leave the debt with the land and don't bother next year?
The LVT solution to this is valuing the unimproved land. I.e., if someone were to bulldoze the buildings and infrastructure actually on the land, how much would it cost to rent it.
This works fine on a lot-by-lot basis; we tend to know the value of land in cities. It starts falling apart when you buy a large amount of land. The value of land in Manhattan is very high, obviously. Remove the Empire State Building and that plot is still worth a lot of money.
The value of that lot is the surrounding infrastructure - transport, power, proximity of people, etc.
If one company owned the entire island as a single lot though, the unimproved land would be very low. If two companies owned the land, the unimproved value suddenly balloons.
That still doesn't answer the question "Why would I pay to improve the land if next year someone can simply outbid me and take the land off me?". Why wouldn't this world's Blackrock swoop in the instant a profitable lease comes up for auction?
> Why would I pay to improve the land if next year someone can simply outbid me and take the land off me?
You wouldn't pay. The building would be financed by debt and the debt would be attached to the land. You'd only pay the interest on that debt for as long as you maintain the lease on the land.
> Why wouldn't this world's Blackrock swoop in the instant a profitable lease comes up for auction?
Two reasons: Firstly, Blackrock could hypothetically do this in real life but they don't because they don't have the money. And secondly, from an ideological perspective they would have no motivation to. You are still thinking in terms of "owning land" and the idea that it's better to own more land. Under the new system, you can't own land, only rent it. Hence you want to minimise the amount of land you are leasing. If Blackrock gets the lease, they will lose money on the down payment, then lose money on the rent unless they can find something productive to do with the land. Any business they set up won't be competitive as they'll be paying more in rent than the surrounding businesses. Overpaying on rent is a strictly money-losing affair.
Even if I don't pay in money, I have to pay in time and effort. Does the bank lend me that too? Furthermore, no, Blackrock cannot do this in real life, because in real life you are not _required_ to sell a thing to anyone.
Let me outline a simple hypothetical:
I lease a plot of land with a parking lot on it. I spend a year of my time building a quadplex on it, financed by a loan, and get tenants in. After all my expenses, I am making 10k a month in profit! Next year rolls around, tell me what stops Blackrock from taking the lease by offering the value of the lease+9.5k per month and making their money in quantity? Down payment doesn't matter because they have so much more in assets than me that they can make use of a plot of land that generates less profit than I need to live on.
Well then Blackrock would only make $500/month in profit. This is good, you found an actor willing to pay higher taxes for the same land and thus contribute more to the public purse.
The issue is that the value for all land in all cases trends towards zero. If you start making a profit you have to give the government all or most of it, if you lose money you just abandon the plot since the debt isn't attached to you. There's zero reason to ever do anything.
This is obvious? This seems like a terrible solution.
Lease durations are important here; if you're like England and have 99 year leases instead of freeholds or something, then that's great, it's basically ownership, but you get no unit pricing signal until the next 99 year renewal rolls around.
Practically speaking, from a Georgist perspective, this isn't even useful, because it doesn't convey anything about the unimproved value. What if you make improvements to the land, like build a house or something? Do you have to knock it down when you leave? Is the government the only entity that can build houses?
What do you do when Elon Musk decides he doesn't like you and outbids you on every parcel you try to get a lease on?
Individually, unit price signals arrive as infrequently as once a century. In practice the individual units turn over more frequently, and aggregate price signals are available as other units turn over far more frequently.
On turnover, it's interesting to note that real-estate mobility has fallen markedly in the United States in recent decades. Whereas ~20% of individuals moved in any given year from 1948 through the early 1970s, that figure has fallen to about 7% in the 2020s:
That translates to a move every 5 years to roughly every 15 years, or three times the residency.
Ironically, among the factors contributing to this is being "stuck" by a mortgage, e.g., a house that's under water (more owed than the unit can be sold for), particularly in states without non-recourse ("walkaway") lending laws. This isn't the only factor, but it does contribute.
Removing private land ownership might well improve this situation.
> What do you do when Elon Musk decides he doesn't like you and outbids you on every parcel you try to get a lease on?
What does he do when he runs out of money for the down payments? Rich people can already price people out of industries in our existing economic system.
> then that's great, it's basically ownership
The entire point is to get rid of ownership, but there is no need to renew the lease at all. Just increase it based on increases in surrounding leases. Then allow the value to be negotiated down through voluntary renewal.
> What if you make improvements to the land, like build a house or something? Do you have to knock it down when you leave? Is the government the only entity that can build houses?
You build the house with debt, the debt stays with the land. If you leave, then the person who comes in after you has to pay for the construction of the house.
Rich people can price others out of industries, but in this scheme the ONLY seller is the government and sales are mandatory. In your scheme, Elon Musk can outbid you on your lease, there's nothing you can do about it, and you don't even get the proceeds from it.
First of all, this isn't a problem with voluntary renewals as I mentioned earlier in another thread (voluntary renewals is where the price is adjusted based on averages for the local area and only renewed if the owner thinks it's too high, down payments also discourage people from over-zealously renting when renewals happen). Secondly, the entire point of Georgism is that people who are under-performing are priced out of hoarding land. The only situation where it makes sense for Elon Musk to overpay on rent to force someone out of their land is when that person is a competitor of his, and this is illegal under existing anti-trust law. Otherwise, him outbidding owners on the lease is the whole point of the system.
Oh that sucks, I thought you were one of the fun homegrown econ crackpots with weird new ideas rather than just rehashing the same old insane theories.