It's a disaster when a system handling user data shows data belonging to another user, and that is how I understand the effects of https://curl.se/docs/CVE-2023-27536.html manifesting in the real world. It was wrong even before privacy regulations were invented.
The previous issue (https://curl.se/docs/CVE-2022-42915.html) seems more sophisticated to take advantage of and probably requires planting something in memory by other means, but I'm no malware specialist and don't know assembly well enough to judge it myself.
It's good that some of these things can be found by the libcurl authors/maintainers themselves. I'd love CVE registries to state directly that the source of a CVE was internal to the project, without having to look up the reporters. Even the least-used feature is still a feature, and as the author of a library you have no control over how your library is used. I think the GSS delegation bug well deserved its initial score.
As a developer on the library-user side, I'd rather see it marked critical and then write a false-positive report based on how we actually use the library in order to pass the CI checks on my side. And having had race conditions is not blame for you; no longer having them is praise.
> It's a disaster when a system handling user data shows data belonging to another user, and that is how I understand the effects of https://curl.se/docs/CVE-2023-27536.html manifesting in the real world. It was wrong even before privacy regulations were invented.
It exposes data to a user who had a right to see it in the first place (it very much evokes the "airtight hatchway", although strictly speaking it is not the same thing). Also, the whole combination is incredibly niche to begin with, and it is hard to see why client code would ever want to establish a second connection to the same server without GSSAPI.
In essence it is a minor implementation oversight with no practical consequences, and probably the only reason it even has a CVE number is that GSSAPI is involved.
> establish a second connection to the same server without GSSAPI
As I understood the description, it's not even that: you would need to establish two connections, both with GSSAPI and using the same username, but with different credential scoping options (the text says with or without credential delegation, but GSSAPI has other options besides delegation and it's not clear to me if that's just an example). In such a case, the client has no control over which connection would actually be used.
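If I read the advisory right, a minimal sketch of the pattern using libcurl's public options could look like the following (hostname, path and username are placeholders I made up; a patched libcurl includes the delegation setting in its connection matching, so this only illustrates the vulnerable behaviour):

```c
/* Hypothetical sketch of the CVE-2023-27536 condition; host and user
 * are made-up placeholders, and patched libcurl versions will not
 * reuse a connection across a changed delegation setting. */
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://server.example/data");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "someuser:");

    /* First transfer: GSSAPI with credential delegation enabled. */
    curl_easy_setopt(curl, CURLOPT_GSSAPI_DELEGATION,
                     (long)CURLGSSAPI_DELEGATION_FLAG);
    curl_easy_perform(curl);

    /* Second transfer: delegation turned off. A vulnerable libcurl may
     * still pick the cached connection that was authenticated with
     * delegation enabled, so the changed option is not honoured. */
    curl_easy_setopt(curl, CURLOPT_GSSAPI_DELEGATION,
                     (long)CURLGSSAPI_DELEGATION_NONE);
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Whether the second transfer ends up delegating or not then depends on which connection libcurl happens to pick, which is exactly the "no control" part.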
> It exposes data to a user who had a right to see it in the first place
I would be surprised if that were the only scenario. I've seen systems that rely on passing requests around under some common account, wrapping communication to further services in all sorts of ways and for a variety of reasons. Some of the reasons for using a single authentication mechanism at the gateway level are laughable, or at least cannot be taken seriously and should end in a redesign, yet too many dev teams agree to such shortcuts because their client says they don't have money for a proper solution.
And again, "minor implementation detail" is not a valid argument. It is the problem of people who judge another project for shipping an unpatched high-severity library version with a bug in an unused feature. Companies that take security seriously have security teams or approval processes responsible for assessing the impact of severe bugs. It's the other ones, in huge numbers, demanding lower severity scores. To me that is like advocating alcohol during pregnancy.
Maybe I'm missing something, but looking at the concept explanation here: https://docs.oracle.com/cd/E19455-01/806-3814/overview-77/in... I would never assume both connections can be used safely in the general case, although it can be fine if the business logic is OK with that and the vulnerable libcurl version is limited to processing a single user's requests.
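For what it's worth, if a team really is stuck on a vulnerable libcurl and the business logic can live with the extra handshakes, the blunt workaround I would consider (my own sketch, not something the advisory prescribes) is to opt out of connection reuse for the affected transfers:

```c
#include <curl/curl.h>

/* Hypothetical helper: while stuck on a vulnerable libcurl, avoid the
 * reuse path entirely for GSSAPI transfers. This costs an extra TLS
 * and auth handshake per request; upgrading libcurl is the real fix. */
static void disable_connection_reuse(CURL *curl)
{
    /* Never satisfy this transfer from the connection cache... */
    curl_easy_setopt(curl, CURLOPT_FRESH_CONNECT, 1L);
    /* ...and never put its connection back into the cache. */
    curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);
}
```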
I love what the author did and I'd probably do the same; I guess I'll replicate the author's path. It all depends on requirements, but I also dislike Python. I want to do something in Python (and I've started something, though a bit against the main trends), but the packaging situation and the recent step of getting rid of signatures is a big no from my side. There is too much news here and there about "enriched" versions of Python packages - and I know other tech stacks have this too, but they seem to have less of it. I would not see it as a problem if HA had its dependencies statically linked, which in Python terms would simply mean copying fixed versions into the HA repo.
Anyway, what is wrong with HA/OpenHAB? It's too "enterprise", or rather too rich, and therefore heavy to run - a VM with at least 2GB of RAM? No, thanks.
The requirements I have for a smart home:
- made for people who tinker - preferably the approach of open-hardware initiatives, where one electric motor can drive either a water pump or a table saw, so you can use it with things you already have and you get some guidance on how to connect it all together in various cases
- designed for the cheapest hardware (nowadays an RPi is quite cheap, but it is not truly open and there are availability issues)
- something doing only what is necessary, and not cloud-first
- an option for build-time removal of unused features
- no need for hot-loaded plugins
- built on tools with long-term compatibility in mind (again: sorry, but custom extensions for each new language in the world are a no)
- a real-time system, or at least good separation of control vs. deployment, etc.
- designed as an optional extension to the existing world, with a switch-off mechanism
- no need for a vast ecosystem as long as it is based on freely available standards
- a nice-to-have feature would be scripted scenarios that can be deployed independently of the main control board
- should be cross-platform at the native level, so in the end that means C; no heavy VMs or interpreters, and ideally runnable on bare metal
- things like report presentation should not be done on the hardware that drives the actuators; data should be kept on another piece of hardware, e.g. a PC
Depending on your starting level, they are designed in a way that is still ahead of what many small software houses have even now.
I went through a couple of mainframe z/OS basics certifications about 10 years ago, never did anything commercial with them, just for educational purposes. All the info comes from lectures and from people who visited the certification study group with job offers.
A bit of history: mainframes are made by IBM, which historically had interesting jobs such as counting votes, which led them into the data processing market; mainframes were built for high data loads. You don't buy a mainframe, you only rent it, physically. The level of duplication possible in these machines is incredible. You might receive a call, or nowadays maybe an email, that something has broken in the mainframe you are using, and you might not even be aware it ever happened. I don't know of any "regular" vendor who can tell you that, say, half of a 24-CPU board has failed, that replacing it is their responsibility, and that they'd like to be sure someone can open the building housing the mainframe. Pro tip: you might be scared because you know you need 40 CPUs, which means you should have 2 such CPU boards, and now you think you are missing 4 CPUs. In reality you will learn that everything is fine and you will not face downtime: they can swap the board without switching the machine off, and anyway you already had 3 boards in case a failure happens. IBM does not want to send technicians back and forth multiple times a year for something this simple, so they just put more parts into your chassis, and since telemetry runs all the time they know when parts have to be replaced.
On the software side you can run pretty much anything fancy there; they can emulate whatever you need as long as enough clients have enough $$$, and this is how they can offer you CPUs with native Java support: you don't run assembly there, you run JVM bytecode directly on the CPU.
And the more concrete software side: sometimes you have to deal with strange limitations. Are they wrong? I don't know for sure, but I tend to think not. Mainframes can operate in a backward-compatible way, and you become hostage to the thinking of a time when computers were at an early, evolving stage and the world looked different. If you write web apps - which in my experience is what the majority of devs do - have you ever considered the limits of the app you are writing? Can you estimate the number of users who can use the app concurrently without measuring? Do you know upfront how much space you need for logs? Maybe yes, but if you have to tell your operating system upfront that you need space for 5 billion log records, and you have to provide a record format with fixed-length strings, you might be surprised. You might also be surprised that your operating system understands that your data is structured and has tools to support you in processing it. Log retention? That is an operating system feature: you can specify that the log matters for, say, 3 such files and that they should rotate; the operating system will notice when you cross the boundary of 5 billion records and will create the next dataset (not "file" - there are no files, many things go by different names). If datasets (000), (001) and (002) are full, the oldest is removed and (003) starts its own life. Oh, and did I mention you can specify this rotation only once, when you create the dataset :)? But only if you work in the backward-compatible mode that is compatible with tape drives.
There are lots of topics I haven't covered that I can relate to when looking at other comments. I'm not sure how much of the hardware side is really true, because I couldn't verify it from the software side, but on the other hand I have no reason to doubt the stories other than it all being so uncommon in regular computing.
I hope I did not make things up while writing down old memories.
One thing that is not really mainframe-specific is the work culture around them. It is just different; to me it seems built for STEM enthusiasts, people who prefer to plan and then execute. On the business side there is a need to do something and see money coming in, and this is the key difference; I don't know whether Agile would work in the mainframe world, but I doubt it. Of course there are deadlines too, but the overall approach is different. And because a lot of things get planned or considered before an app is deployed, the apps come with books of procedures: if error QWE123 happens, please follow these 15 steps. This is where the mainframe operators come in (mentioned in other comments). I don't know why this work should be done by people - the terminal that accesses the mainframe can be scripted around and used pretty much like a REST API (or close to it) - but there must be something extra in having an analog processing unit in the loop. Playing with emulators is not enough; you will invent your own ways of doing things, which is an undesired trait when you work with mainframes. They are supposed to be low-maintenance, so in mature environments there is usually little room to come up with your own custom solutions. On the other hand, they are the source of many concepts that already exist, e.g. blue-green deployments, but with the option to let already-started operations finish on the old version: a completely invisible rollout of a new app version.
Mainframes are fascinating for many reasons, and while I know some of the features can be replicated in regular environments, I have had bad luck choosing jobs. Companies hiring for mainframes usually offer all the required training; what I don't understand is why they don't offer salaries well above the market for work that is far more cumbersome than other things. I received an offer to work in the capital city of a neighbouring country in a similar economic situation, and while the offer was about 30-40% above my actual first job, it did not please me at all; I could have done a regular dev job in my own capital city for a little less than that if I had wanted to. I worked one proxy dev contracting job for a company with thousands of people around the world, and somehow my local managers "took care" of the opportunity to work with a mainframe so that it never happened; somehow it was seen as a bad opportunity. I guess it was problematic for the local office to deal with the likely security measures.