> Why doesn't the README file explain what this repository is doing?
It explains exactly what it's doing.
"Microsoft Store package for Windows LTSC."
It provides a Microsoft Store package for LTSC builds, and an install script that allows it to actually work. Windows LTSC builds don't have Microsoft Store preinstalled, and Microsoft offers no official way to re-enable it.
> Windows LTSC builds don't have Microsoft Store preinstalled
No, it's not that it isn't "preinstalled": the Microsoft Store is literally not supported on LTSC, by design. LTSC was never intended to run the Store. The original use case for LTSC was ATMs, industrial control equipment, hospitals, and the like, where IoT wasn't appropriate and you needed the ability to run full desktop applications.
> Microsoft offers no official way to re-enable it.
Yeah that's because the Store was never supposed to run on LTSC. It's not supported. Why would they offer an official way to re-enable it? The whole point of LTSC is that it doesn't include the store.
If someone cobbled together an ugly hack to shoehorn it in, by definition it could break at any time.
If by "customer" you mean "way of making money", I agree, since I didn’t pay for it. OTOH, I have been running LTSC on my desktop for years because it's the best edition of Windows, and I haven't had any issues with the Store, which I had to install manually, thus far.
To be fair, the headline could have been better worded. The convention for something like this is
“Show HN: Title of Repo”
I can understand how one might not grasp what the aim of this post was. Maybe the ensuing conversation could have been handled better, but I would certainly include the parent comment in that indictment.
The Store is also an app on Windows, and it's sometimes a hard dependency for installing apps that only exist on the Microsoft Store without having to jump through many hoops. It's usually part of Windows itself in the regular retail builds, but LTSC, which is meant for enterprise and embedded systems, does not include it. Installing it is not straightforward, which is the gap this repo fills.
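For context, the usual mechanism such install scripts rely on is sideloading the Store's .appx/.msixbundle packages (plus framework dependencies like VCLibs and UI.Xaml) with PowerShell's Add-AppxPackage. Below is a rough Python sketch of that general idea only, not the repo's actual script; the folder and file patterns are placeholders:

    import subprocess
    from pathlib import Path

    # Placeholder folder containing the downloaded .appx/.msixbundle packages.
    PACKAGE_DIR = Path(r"C:\store-packages")

    def add_appx(path: Path) -> None:
        """Install one package by shelling out to PowerShell's Add-AppxPackage."""
        subprocess.run(
            ["powershell", "-NoProfile", "-Command", f"Add-AppxPackage -Path '{path}'"],
            check=True,
        )

    # Framework dependencies must be installed before the Store bundle itself.
    for dep in sorted(PACKAGE_DIR.glob("*.appx")):
        add_appx(dep)

    for bundle in PACKAGE_DIR.glob("*.msixbundle"):
        add_appx(bundle)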
There's no source code, it's just a bunch of binaries and an install/uninstall script.
Edit: I should clarify that the link provided in the repo is not the Microsoft Store that the apps refer to. This would be a better link: https://apps.microsoft.com
I don't think there's anything nefarious going on here, but to someone just quickly looking over the page it gives the impression of being an official Microsoft project, given the gratuitous use of their trademark and zero mention of it being a "community" effort.
> The lack of support on LTSC is the least baffling thing going on here but I'm open to the possibility that I'm misunderstanding something....
And yeah, you're right. But indeed, many people need to use the Store on LTSC, especially after Microsoft moved many of its ecosystem efforts into the Store, for example Microsoft Photos and extensions like HEIC, and now it's not only UWP applications that can enter the Store; regular applications can as well. It actually poses a very big problem that we cannot use the Store anymore, at least that's what I think.
Furthermore, it is not just LTSC 2019 that is affected; older versions of Windows (1809 or older) are also no longer able to use the Store. In other words, we can no longer use the Store on older versions of Windows at all. You might say that Microsoft never intended to support older versions, and yeah, I agree, that's true. However, the fact is that many people use Windows largely because of its compatibility advantages. I believe everyone should at least be aware that Microsoft is not as compatible with older programs as assumed, especially its own, which is what I want to express.
As for the license, I would like to clarify that it is only to prevent the packaging scripts from being used for commercial purposes and promotion. As you can see, this repository is not specifically intended for hosting store programs, so it does NOT apply to the store programs themselves, but only to the deployment scripts :)
I agreed with a lot in the article, but I was a bit baffled by the DEI name-drop in the opening.
> "... the guys who had big tech startup successes in the 90s and early aughts think that 'DEI' is the cause of all their problems."
Who is the author referring to here?
(I realize that DEI has been rolled back at some companies, and Zuckerberg in particular has derided it, yet I still feel like the author is referring to some commonly accepted knowledge that I'm out of the loop on.)
Certainly Andreessen has gone on many public rants about how he thinks DEI and other "woke" initiatives are killing American tech innovation. Here's one interview: https://www.nytimes.com/2025/01/17/opinion/marc-andreessen-t... About halfway through he really lets it fly.
Suppose user U has read access to Subscription S, but doesn't have access to keyvault K.
If user U can gain access to keyvault K via this exploit, it is scary.
[Vendors/Contingent staff will often be granted read-level access to a subscription under the assumption that they won't have access to secrets, for example.]
(I'm open to the possibility that I'm misunderstanding the exploit)
My reading of this is that the Reader must have read access to the API Connection in order to drive the exploit [against a secure resource they lack appropriate access to]. But a user can have Reader rights on the Subscription, which do cascade down to all objects, including API Connections.
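As a rough illustration of that cascade (a sketch only, assuming the azure-identity and azure-mgmt-resource Python packages and a placeholder subscription ID), a plain Reader can already enumerate every API Connection in the subscription:

    from azure.identity import AzureCliCredential
    from azure.mgmt.resource import ResourceManagementClient

    # Placeholder subscription ID; the credential only needs the Reader role.
    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

    client = ResourceManagementClient(AzureCliCredential(), SUBSCRIPTION_ID)

    # Logic App API Connections are ARM resources of type Microsoft.Web/connections,
    # so subscription-level Reader access is enough to list them all.
    for conn in client.resources.list(filter="resourceType eq 'Microsoft.Web/connections'"):
        print(conn.name, conn.id)

Whether that visibility can then be leveraged into reading the secret behind a connection is the part the article explores.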
But also, the API Connection seems to have secret reader permissions, as per the screenshot in the article… Giving secret reader permission to another resource seems to be the weak link.
The API Connection in a Logic App contains a secret in order to read/write (depending on permission) a resource. Could be a Key Vault secret, Azure App Service, Exchange Online mailbox, SharePoint Online site..., etc.
The secret typically is a user account (OAuth token), but it could also be an App Id/Secret.
But somebody gave the API Connection permissions to read the KV secrets, the Exchange mailbox, the SharePoint folder, etc… And anybody who has access to the API Connection now has access to the KV, SharePoint folder, etc… I do not think this is a problem with Azure; this is just how permissions work…
The API Connection in the example has permissions to read the secrets from the Key Vault, as per the screenshot.
It seems to me the Key Vault secret leak originated when the owners of KeyVault K gave secret reader permissions to the API Connection. (And I will note that granting permissions in Azure requires the Owner role, which is way more privileged than the Reader role mentioned in this article.)
[edit - article used Reader role, not Contributor role]
> it's not a data breach for the government to have access to government data
This absurd oversimplification needs to be called out.
The 'government' is not a single individual, nor should 'government data' be treated without regard to specifics.
The exact entity doing the accessing, and the exact data that's being accessed, all need to be accounted for, and the appropriateness of the access will change depending on the context.
DOGE hasn't been transparent in any of this, which is my chief complaint at the moment.
Totally disagree, I've used KQL for about 10 years now, and SQL for 20. Given the choice, I'll always prefer KQL.
Sorry, I don't have time for a thorough rebuttal of all the topics mentioned in the link you provided, but if I had to bring up a few counterpoints:
1. (Can't be expressed) KQL's dynamic datatype handles JSON much better than SQL's language additions.
2. (Variables/Fragile structure/Functions) KQL fixes many of the orthogonality issues in SQL. (Specifically: both variable assignments and function parameters can accept scalar and tabular values in a similar way, whereas SQL uses different syntax for each.)
> How can this be possible if you literally admit its tab completion is mindblowing?
I might suggest that coding doesn't take as much of our time as we might think it does.
Hypothetically:
Suppose coding takes 20% of your total clock time. If you improve your coding efficiency by 10%, you've only improved your total job efficiency by 2%. This is great, but probably not the mind-blowing gain that's hyped by the AI boom.
(I used 20% as a sample here, but it's not far away from my anecdotal experience, where so much of my time is spent in spec gathering, communication, meeting security/compliance standards, etc).
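A quick sketch of that arithmetic (Python; the numbers are purely illustrative):

    coding_share = 0.20  # fraction of total clock time spent actually coding
    coding_gain  = 0.10  # improvement in coding efficiency from the tool

    # Naive estimate: only the coding share improves, so the overall gain
    # is at most the coding share times the coding gain.
    print(f"{coding_share * coding_gain:.0%}")  # 2%

    # Amdahl's-law style version (treat the gain as a 1.1x speedup of the coding part):
    overall_speedup = 1 / ((1 - coding_share) + coding_share / (1 + coding_gain))
    print(f"{overall_speedup - 1:.1%}")  # ~1.9%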
For a large portion of my career, the dashboarding solutions I've worked with have followed a similar model: they provide a presentation layer directly on top of a query of some kind (usually, but not always, a SQL query). This seems like a natural next step for organizations that have a lot of data in one spot but no obvious way of visualizing it.
But, after implementing and maintaining dashboards/reports constructed in this way, big problems start to arise. The core of the problem is that each dashboard/report/visual is tightly coupled to the datasource that's backing it. This tight coupling leads to many problems which I won't enumerate.
Power BI is great because it can provide an abstraction layer (a semantic model) on top of the many data sources that might be pushed into a report. You're free to combine data from Excel with MSSQL or random JSON, and it all plays together nicely in the same report. You can do data cleaning in the import stage, and the dimension/fact table pattern has been able to handle the wide variety of challenges that I've thrown at it.
All this means that the PowerBI reports I've made have been far more adaptable to changes than the other tightly coupled solutions I've used. (I haven't used Tableau, but my understanding is that it provides similar modeling concepts to decouple the data input from the data origin. I'm not at all familiar with Looker).
[disclaimer, I'm a Microsoft Employee (but I don't work in Power BI)]
The problem with semantic models that I've seen in tools like Looker, Tableau, and Qlik (and very probably the same for Power BI) is that they are tightly coupled to the tool itself and only work within it. If you want a "modern data system", you want them decoupled and implemented as an open semantic model that is then accessible to data consumers in Google spreadsheets, Jupyter notebooks, and whatever BI/analytics/reporting tool your stakeholders use or prefer.
There are very new solutions for this, like dbt semantic models; their only issue is that they tend to be so fresh that bigger orgs (where they make the most sense) may be shy about implementing them yet.
To the original topic - I'm not sure how much PG17 can be used in these stacks; analytical databases are usually a much better fit - BigQuery, Snowflake, maybe Redshift, and in the future (Mother)Duck(db).
The semantic model in Power BI is not tightly coupled to the tool. It is an SSAS Tabular model. It is pretty trivial to migrate a Power BI model to Analysis Services (Microsoft's server component for semantic models, hostable on-prem or as a cloud offering).
Both Power BI and Analysis Services are accessible via XMLA. XMLA is an old standard, like SOAP old, much older than dbt.
XMLA provides a standard interface to OLAP data and has been adopted by other vendors in the OLAP space, including SAP and SAS as founding members. Mondrian stands out in my mind as an open source tool which also allows clients to connect via XMLA.
From what I can see, dbt only supports a handful of clients and has a home-grown API. While you may argue that dbt's API is more modern and easier to write a client for (and I'd probably agree with you! XMLA is a protocol, not a REST API), the point of a standard is that clients do not have to implement support for individual tools.
And of course, if you want API-based access there is a single API for a hosted Power BI semantic model to execute arbitrary queries (not XMLA), though its rate limits leave something to be desired: https://learn.microsoft.com/en-us/rest/api/power-bi/datasets...
Note the limit on "one table" there means one resultset. The resultset can be derived from a single arbitrary query that includes data from multiple tables in the model, and presented as a single tabular result.
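For example, here is a minimal sketch of calling that executeQueries endpoint from Python, assuming you already have an Azure AD access token with dataset read permissions; the dataset ID, token, and the 'Sales' table name are placeholders:

    import requests

    dataset_id = "<dataset-guid>"       # placeholder
    token = "<aad-access-token>"        # placeholder

    url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries"
    body = {
        # One DAX query; the response comes back as a single tabular resultset.
        "queries": [{"query": "EVALUATE TOPN(10, 'Sales')"}],
        "serializerSettings": {"includeNulls": True},
    }

    resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    rows = resp.json()["results"][0]["tables"][0]["rows"]
    print(rows)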
Note: XMLA is an old standard, and so many libraries implementing support are old. It never took off like JDBC or ODBC did. I'm not trying to oversell it. You'd probably have to implement a fair bit of support yourself if you wanted a client tool to use such a library. Nevertheless it is a protocol that offers a uniform mechanism for accessing dimensional, OLAP data from multiple semantic model providers.
With regard to using PG17, the Tabular model (as mentioned above, shared across Power BI and Analysis Services) can operate in a caching/import mode or in a passthrough mode (aka DirectQuery).
When importing, it supports a huge array of sources, including relational databases (anything with an ODBC driver), file sources, and APIs. In addition to raw HTTP methods, Microsoft also has a huge library of pre-built connectors that wrap APIs from lots of SaaS products, such that users do not even need to know what an API is: these connectors prompt for credentials and let users see whatever data the SaaS exposes that they have permission to see. Import supports Postgres.
When using DirectQuery, no data is persisted by the semantic model, instead queries are generated on the fly and passed to the backing data store. This can be configured with SSO to allow the database security roles to control what data individual users can see, or it can be configured with security roles at the semantic model layer (such that end users need no specific DB permissions). DirectQuery supports Postgres.
With regard to security, the Tabular model supports static and dynamic row-level and object-level security. Objects may be tables, columns, or individual measures. This is supported for both import and DirectQuery models.
With regard to aggregation, dbt seems to offer sum, min, max, distinct count, median, average, percentile, and boolean sum. Or you can embed a snippet of SQL that must be specific to the source you're connecting to.
The Tabular model offers a full query language, DAX, that was designed from the ground up for expressing business logic in analytical queries and large aggregations. Again, you may argue that another query language is a bad idea, and I'm sympathetic to that: I always advise people to write as little DAX as possible and to avoid complex logic if they can. Nevertheless, it allows a uniform interface to data regardless of source, and it allows much more expressivity than dbt, from what I can see. I'll also note that it seems to have hit a sweet spot, based on the rapid growth of Power BI and the huge number of people who are not programmers by trade writing DAX to achieve business goals.
There are plenty of pain points to the Tabular model as well. I do not intend to paint a rosy picture, but I have strived to address the claims made in a way that makes a case for why I disagree with the characterization of the model being tightly coupled to the reporting layer, the model being closed, and the model being limited.
Side note: the tight coupling in Power BI goes the other way. The viz layer can only interact with a Tabular model. The model is so broad because the viz tool is so constrained.
Thanks for the writeup. There are indeed use cases, especially in the MS multiverse. Proof of the none -> basic -> complex "can do everything" (SOAP, XML, RPC) -> radically simpler "do what really matters" (REST, JSON, Markdown) path. I'm not really sure the dbt semantic layer is the final open "standard" for future analytical models and metrics; it has its own question marks, it is literally just a transformer with metrics as an add-on, and there are only initial implementations, but today I'd rather give that thing a try. Simpler is so much better.
My org has been heavily pushing us to use Power BI, and I have found it has a lot of pain points.
I expect how useful it is depends a lot on your use cases. If you want to throw a couple of KPIs on a dashboard, it fulfils that requirement pretty well, but if you want to do any analytics (beyond basic aggregations like min, max, and median) or slightly complicated trending (like multiple Y axes), Power BI is painfully complicated.
"The age of the oldest trees is not certain, but 100 rings have been counted on a downed loblolly pine and a downed chestnut oak. There is no old-growth forest in this park, however, the strong protections put in place on this forest ensure that it may recover in time."
https://www.oldgrowthforest.net/va-james-river-park-system
(The exact definition of old-growth isn't agreed on, but I've seen some forestry documents in the PNW that demand a tree age of 400+ years as a prerequisite for the old-growth categorization.)
I had always thought of an old-growth forest as less defined by the age of the trees, and more by the lack of human management/disturbance. I think of old growth as being in comparison to a second-growth forest, which had been logged and replanted. I'm not a forestry scientist, just cobbling definitions together as I go. But it seems unfair to say you could never have an old growth forest of aspen—a species where individual trees only live about 100 years—if the forest itself had been untouched since the dawn of time.
It's not about the age of the individual trees but the overall forest since it was last disturbed by humans (on a large scale like logging or agriculture).
Age matters because there are several "thresholds" during which the diversity of species significantly increases, and that starts with the older trees falling and decomposing. The conifers in the PNW take anywhere from 50-150 years to decompose before they start breaking apart and littering the forest floor. Once the first generation of trees is broken up and spread around by the wildlife and the fungi species are fully established, the growth of the third/fourth/fifth generations of trees and plants becomes a lot more vigorous. When the second and third generations of trees start falling, they end up creating the dense habitats that support large food webs from the rodents on up. All the while, the tree roots pull nutrients from deeper and deeper in the earth, allowing the rain and cold to build up a layer of peat on the forest floor that cycles and stores the nutrients, improving the quality of the dirt.
It just takes a while for all these processes to build up.
Right! I guess they are exceptions because we cut them down before old age, not for biological reasons. Cows older than 4 years are an exception because we slaughter most of them before that, but they could live 5 times longer.
> Cows older than 4yo are an exception because we slaughter most of them before
Huh? A heifer generally won't have her first calf before two years of age. If you are only getting two productive years out of your typical cow, you're doing something horribly wrong.
> Meat or beef cows live for 1.5-2 years in the commercial beef industry. However, the natural life of beef cattle is between 15-20 years. Heifers and cows (female cattle) often live for between 5-6 years as they breed to produce the next generation of beef cattle.
Today's dairy cows are bred to produce a LOT of milk (30-60 L/day); in comparison, those not bred for that produce 4 L/day. However, that output does not hold up calving after calving, and they soon become unprofitable.
> think you misread. Beef caws (male) usually live 1.5-2 years
I'll assume I misread "caw". But this apparent multi-sexed beef cattle beast you now speak of is even stranger still.
> What do you mean by wrong ? These are the figures of modern practice.
Under modern practices, the productive lifespan of a cow is usually around 4-5 years, as echoed by your links[1]. If you're only getting two out of your typical cow, something is afoot with the management of your herd. If you are culling them before their productivity has been exhausted, how are you squaring the resources you put into their unproductive early years? In other words, how the hell is your farm managing to stay profitable if you find a cow older than four to be abnormal?
[1] Which was strange in its own right. Can you not speak to the subject yourself?
Thanks for pointing that out, my English isn't perfect, and a quick search shows me that a cow is the female :-)
> speak to the subject yourself
I wasn't explicit enough here either: "we slaughter [...]" -> "Humans slaughter [...]". I'm not a farmer, just some random guy.
For the profitability part : "The value of cull cows at slaughter represents a source of income for dairy farms."[1]
However, the profitability incentives vary depending on the breed and lead to ...strange... practices: newborn males are hardly profitable if they come from a dairy-super-optimized breed, and some farms cull them at birth [2]. It's now forbidden in the UK but still a practice in Switzerland (yummy Toblerone).
> Does the US invest in manufacturing like this anymore?
It might be easy to argue over the exact degree of similarity, but I'd argue that the US has repeatedly made manufacturing investments since the 1950s. Buried in bills signed into law, you'll find such investments.
Recent examples include the American Recovery and Reinvestment Act of 2009 and the CHIPS and Science Act of 2022.
Why doesn't the README file explain what this repository is doing?
OP, what did you hope to accomplish with this submission?
The lack of support on LTSC is the least baffling thing going on here but I'm open to the possibility that I'm misunderstanding something....