
Confusion. You're experiencing confusion.

I get the exact same thing.


They're totally worth it.


I can't help but think about the 5 monkeys and a ladder.

https://www.wisdompills.com/the-famous-social-experiment-5-m...

(Even if it is bullshit)


You could do the math if you look up the half-life of caffeine. A quick Google search suggests it's 4-6 hours. So if I understand it correctly, it takes 4-6 hours for 100mg to become 50mg, then another 4-6 for 50mg to become 25mg, etc. So if all you had was a single cup of 100mg coffee, it's something like 12-18 hours for it to mostly go away. If you're me, though, you drink 3-4 a day, which seems to work out to about 28 to 42 hours. Withdrawals could take up to 2 weeks in my experience, and are proportional to how heavy my intake was.
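The arithmetic above is just exponential decay. A tiny sketch, assuming a simple one-compartment model and using 5 hours as an illustrative midpoint of the quoted 4-6 hour range (the function name is made up for the example):

```python
# remaining = dose * 0.5 ** (hours / half_life)
# Simple one-compartment decay model; 5 h is an illustrative midpoint
# of the 4-6 h half-life range quoted above.

def caffeine_remaining(dose_mg, hours, half_life_h=5.0):
    """Milligrams left `hours` after a single dose."""
    return dose_mg * 0.5 ** (hours / half_life_h)

# A single 100 mg cup:
print(caffeine_remaining(100, 5))   # 50.0  (one half-life)
print(caffeine_remaining(100, 10))  # 25.0  (two half-lives)
print(caffeine_remaining(100, 15))  # 12.5  (three half-lives)
```

Note it never reaches exactly zero; the 12-18 hour figure above corresponds to roughly three half-lives, i.e. down to about 12% of the dose.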


That tells you how long before it's out of your system, not how long it takes to reset after.

Effect duration for a psychoactive is related to half-life, but they aren't identical, and 'reset time' varies widely; e.g. MDMA is over in about 4 hours, but it takes a minimum of two weeks before one can get a comparable effect again.


You'd have to look at things like the rate of downregulation of the appropriate brain components that caffeine affects.


I would love to produce my own ClearView.


They have obstacles to overcome, and then there is just the glaring fact that they had so much momentum with Win32.

My biggest gripe is that startup times are just too long. So many times I've gone to launch something like the calculator, only to have a beefy machine hang for almost, if not a full, second. It's likely an engineering obstacle that can be overcome, but it makes UWP look like a bad move perf-wise.


There was a brief period in Windows 8 where (proto-)UWP apps launched smoothly and quickly while the Win32 desktop took several additional minutes to launch. With so many stacks living side by side on Windows, it seems an interesting trade-off between past apps, present apps, and future wishes.


I hope future wishes don't involve adding yet another stack though. They need to fix the stuff they have (without breaking them), not introduce new stuff.


That's part of my sadness in seeing so many posts here talk badly about UWP. UWP barely had half a chance to live, and with Microsoft having to spend so much time in the Chromium codebase and on Android code, and with Electron and React Native seemingly becoming their most common development platforms, the future for Microsoft already doesn't seem to be UWP.


> They have obstacles to overcome, and then there is just the glaring fact that they had so much momentum with Win32.

Do you feel the same about the macOS Carbon-Cocoa transition? Nobody misses Carbon anymore, and Win32 is as old as the classic Mac Toolbox.

Eventually they have to shed the old API.


Major Win32-only applications like Microsoft Office, SAP and countless in-house applications written over the last 25 years are the reason that many companies use Windows.

Microsoft understands that the Windows API "is so deeply embedded in the source code of many Windows apps that there is a huge switching cost to using a different operating system instead".[1] If they force companies to incur that cost by dropping Win32 support, that gives companies the chance to make their application cross-platform at a much smaller additional cost, and shed their reliance on Windows.

Windows licensing fees are still quite a large chunk of Microsoft's revenues, so I think that they will not make this move any time soon.

1. https://www.zdnet.com/article/microsoft-wed-have-been-dead-a...


Win32 support will be there until Microsoft's desktop operating system is no longer called "Windows."

The comparison between Apple and Microsoft vis-à-vis backward compatibility is illuminating, but not in the way you seem to believe it is.


Cocoa was an improvement though, while UWP is a regression and a massive PITA to work with. Win32 isn't a great API either, and some corners are downright ugly, but when a successor needs more lines of code to get a window on screen but only delivers a small fraction of the features of the old system, something went seriously wrong.


I can't answer your question exactly. I came around only after Cocoa was established.

This does make me wonder if the transition from Carbon to Cocoa is analogous, though. Win32 comes off as more of a "functional" approach to programming (C), and UWP looks more object-oriented (akin to C++). It makes me wonder what kinds of conceptual advantages are brought on board by moving from Win32 to UWP. I think that conceptually I am in favor of the transition, and I'd wager that the issues that arise are common to a functional-to-object-oriented transition.

But the real questions: What are the benefits? Is it conceptually easier to grasp, thus promoting more developer interaction? Are we expecting speed benefits (and are the current performance losses expected and in range)? Is it a false dichotomy to only look at functional vs. OO? Maybe a context-oriented data approach is more appropriate?

Dunno.


Win32 is fast and reliable.


And frozen in a Windows XP vision of the world.


And Wine. Don't forget Wine.


It looks like no one even cares about the drama of this.


Probably the wrong crowd? This is a forum for developers and such, many people here wouldn’t look at a website builder with their ass.


That is interesting and counter to my experience. Consider looking into alternative causes (uninstalling plugins that may not be playing nicely with your particular version, checking for background updates, etc.).


Given Microsoft's new mission statement, I think that if the community is more vocal about the things they want to accomplish, it will become imperative that people at the company be assigned to making those desiderata a reality. So if you want more visual tooling, find a way to become more vocal about it. If you want more support for this or that tech fad, make it known.


The community has been very vocal on a number of issues and has been thoroughly ignored and steamrolled over. Or the direction changes once the desired outcome has been established.


I still can't believe more people aren't leveraging the Roslyn APIs to write compiler extensions or additional tools around C#. It's conceptually powerful.


Personally speaking, I'd love to, but I'd then have to figure out how to make them work with an IDE. I'm very tired of waiting for record classes to show up and I could have written a (to be clear: inferior) set of stuff around regular classes that magics one into "everything is readonly and we autogenerate a `copy` method", much like Kotlin does for its data classes...but my IDE isn't gonna understand it unless I do a lot more work, and so I never bothered.
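For what it's worth, the shape being described (all fields readonly plus an autogenerated non-destructive copy, like Kotlin's data-class `copy`; C# 9 later shipped this as records with `with` expressions) can be sketched with Python's frozen dataclasses, purely to illustrate the semantics. The `Point` type is made up for the example:

```python
# Illustrating "all fields readonly + autogenerated copy" using
# Python's frozen dataclasses; mutation is rejected, and replace()
# builds a modified copy instead.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
q = replace(p, y=5)  # non-destructive "copy with changes"
print(p)  # Point(x=1, y=2) -- the original is unchanged
print(q)  # Point(x=1, y=5)
```

The IDE-integration pain the parent describes is real, though: the language's own records get tooling support for free, while a hand-rolled version needs analyzer/generator plumbing before the IDE understands it.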


I have a code-gen for records and discriminated unions [1] (as well as other useful features). It will generate a record-type at build-time (which includes the background build process in VS). All that's needed is a [Record] or [Union] attribute:

[1] https://github.com/louthy/language-ext/wiki/Code-generation


You just made my day. Not kidding. Thank you.


I'm not sure it would be that difficult with Roslyn and Visual Studio these days. There are Roslyn-based syntax highlighters, linters, snippet generators, and code transformers. I use one that makes sure all of my code is not just formatted the way I want, but also auto-inserts "readonly" modifiers on fields that never get reassigned outside the constructor; one that treats incorrectly implementing IDisposable as an error (it can also track the lifetime of a disposable object and warn when it detects that it never gets disposed, which is super cool); and another that rainbow-highlights code blocks. The tooling is there to support just such a thing; it just needs someone to put all the pieces together.


Hey, that's cool. Sounds like things have really improved. Maybe next time I get back into C# I'll see about writing the gizmo I want, if somebody else hasn't already. Thanks a lot.

(If somebody has--well, I wouldn't mind some lazyweb recommendations...)


What is the name of that IDisposable checking one? That sounds very, very useful.


I think the easiest way to set it all up is to manually edit your CSProj files, especially if you are still building .NET Framework projects. By default, Visual Studio will only create the new SDK Style project file format for .NET Standard and .NET Core projects, but it's still usable for .NET Framework projects if you manually change the format. Once you change it, it sticks, so you can use VS to edit the config after that, but it's still pretty easy to edit by hand now.

So here is my base project config: https://github.com/capnmidnight/Juniper/blob/master/Juniper....

The most important part is that the first PropertyGroup sets values for all build configs, in particular setting LangVersion to 8.0. Framework 4.8 officially taps out at C# 7.2, but you can use most of the C# 8.0 features, including fully async streams, if you manually set the language version. The features that aren't available are minor things like array ranges and indexing: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

And here are my base project with the analyzers I use: https://github.com/capnmidnight/Juniper/blob/master/Juniper....

They're all ones provided directly from Microsoft, though there are a bunch more from other vendors: https://www.nuget.org/packages?q=analyzer

Then here is an example project using that targets file: https://github.com/capnmidnight/Juniper/blob/master/src/Juni...

You can see just how much the new SDK Style project file format simplifies things. There is no importing of any base Targets files hidden deep in Visual Studio's install directory anymore.

I manually import the .props and .targets file instead of using Directory.Build.props and Directory.Build.targets because I have other projects that use these configs, included via a git submodule.

Here is my .editorconfig file, where I set most of the rules related to Disposable types to errors: https://github.com/capnmidnight/Juniper/blob/master/.editorc...

And this Visual Studio extension makes .editorconfig files a lot nicer to work with: https://marketplace.visualstudio.com/items?itemName=MadsKris...

(BTW, I pretty much install all of Mads Kristensen's extensions)

And while I'm here, I'll give a shout-out to Viasfora for its syntax highlighting modifications that rainbow-highlight code blocks: https://marketplace.visualstudio.com/items?itemName=TomasRes...

And VSColorOutput for making the Output window in Visual Studio actually readable: https://marketplace.visualstudio.com/items?itemName=MikeWard...


Five years ago I wrote a C# code analyzer that found synchronous EntityFramework extension-method calls inside async functions and suggested/auto-fixed the async versions.

Example: if my function was async and had a .ToList() call inside, it would suggest using .ToListAsync() and, with a click, auto-fix it in your function.

Here is the repo for reference:

https://github.com/aviatrix/YARA

It took me a couple of days to wrap my head around all of the new concepts, but it was quite fun! To get it integrated with the IDE, you need a "CodeAnalyzer" class, and that gets executed automagically and provides annotations in the IDE :)


We've written a custom compiler from C# to Java and JavaScript based on Roslyn. Over time it gained more features and target languages as well (Python is in progress, we can also emit a working GWT wrapper for the JavaScript output, we can emit TypeScript typings or just normal TypeScript as well, etc.). For us this helps us in offering our products on various different platforms without having to write the code in different places anew. Since our product is a library, not an application, we couldn't really take advantage of existing conversion tools that mostly take the path of converting IL to hideous code and an entry point.

The whole thing is now used in basically every library build we have at some point, even for the C# versions, as it ties in with our documentation writing process and places the correct API names and links for that product into the documentation, even though the docs start with mostly the same content for each.

I agree that lack of documentation makes working with Roslyn a bit daunting at times, although the API is very well designed and oftentimes it's very obvious where to look for something. I was also very impressed by their compatibility efforts. We started while Roslyn was in beta and upgrading through the releases worked without a hitch.


I'll second the API being very well designed. It is incredibly legible from a technical perspective and it has actually been generally a joy to figure it out, as opposed to just being told how it works. That being said: I need some bathroom friendly reading material every now and again.


I looked a bit when it was in preview. It's indeed powerful, but also quite verbose, and it distinguishes between a lot of concepts, so there is a steep learning curve. And then there was the lack of examples beyond a few blog posts.


Oh man, the verbosity thing was a huge problem until C# got "using static". Any static classes that just hold static methods can be imported into your code module as bare functions. I frequently do "using static System.Math;" and "using static System.Console;" when tossing together little mathy processor apps.


Agreed. I have a few ideas and it would be nice to see more examples. Right now it’s hard to make sense of things without spending a lot of time on it.


At some point I wrote a barebones scripting system using Roslyn for a project of mine. At runtime I compiled a piece of code to a DLL in memory and then executed it. It worked well, but at that time there was no support for destroying AppDomains or something (I don't remember the exact name). Still pretty fun. But yeah, there's no real complete documentation anywhere, and the DLL hell was real: dozens of DLLs just to support this. Now with .NET Core 3+ things must have improved a lot.


Roslyn is very powerful, but it is still pretty cumbersome. At one point I spent quite a while trying to get a game scripting system akin to the way that Lua is commonly used working, and I just couldn't get it working fast enough to be viable. I essentially wanted to have a core C# engine that provided services, and then have it call an initialization function and a gameloop function that were defined in designated script files, and then all of my game code would also be written in other scripts; this way I could run things, edit the code on the fly, and hot-reload.

It's entirely possible that I just don't know what I'm doing well enough to do this correctly, but I just couldn't get it to do the kinds of things that I wanted from it. My sense is that it is really great for injecting custom code that is used rather infrequently. I've had good results using it for building out reporting systems that are pluggable.


I think ignorance is a pretty powerful argument to be made by any developer wanting to use Roslyn. The last few times I messed with it there was literally no documentation and everything I knew/know comes from reading headers, trial and error, and experimentation.

I feel defensive about calling it cumbersome though, and I can't imagine why something like your Lua vision isn't possible (though I've never tried; I just assumed someone would inevitably do this). For example, if World of Warcraft were to switch out their Lua UI extension system with C#, I could totally imagine this being possible (though it'd be suicidal for their mod community). Likewise, if Unity were to begin using it for this kind of thing (if they don't already), I'd imagine it is possible.


We have some documentation now: https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/

Would really appreciate bugs/comments on these docs pages about what else you would like to see.


Well, it's not cumbersome at all. The problem is really the lack of real documentation and, most importantly, real-world examples. As for performance: if you use the actual scripting support in Roslyn, yeah, I think you'll get bad performance. But if you compile the code to memory at runtime and execute it like I did, it's pretty fast.
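The "compile to memory at runtime, then execute" pattern is easy to illustrate outside of Roslyn too; here is the same idea sketched with Python's built-in compile/exec (the script contents and function name are made up for the example):

```python
# Runtime compilation sketch: build source as a string, compile it to
# an in-memory code object, execute it into a fresh namespace, and call
# the function it defined -- the same idea as compiling C# to an
# in-memory assembly with Roslyn and invoking it via reflection.
source = """
def damage(base, multiplier):
    return base * multiplier
"""

code = compile(source, "<script>", "exec")  # compiled in memory, no file on disk
namespace = {}
exec(code, namespace)  # "loads" the script into the namespace

print(namespace["damage"](10, 3))  # 30
```

The reload story is the part that differs by platform: in Python you just throw the namespace away, while in .NET you historically needed AppDomains (or, on .NET Core 3+, collectible AssemblyLoadContexts) to unload the compiled code.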


FWIW, I wrote effectively what you describe pre-Roslyn, using the old CSharpCompiler, and it was plenty fast enough for my own developer tolerances. I ended up going to a compile-on-startup mode instead, though.


DLL Hell doesn't refer to "lots of DLLs", it refers to conflicting versions of DLLs not being easily manageable across multiple applications. Outside of putting assemblies in the GAC (which was always a hack of last resort anyway), .NET has never had "DLL Hell".


Yeah, the closest .NET equivalent is Binding Redirection hell, and NuGet and Visual Studio and .NET Core have spent years of work making that better than it was at its worst in the early NuGet days. There's a couple of low level APIs that are still annoying problems to redirect if you need to support particular sets of .NET Framework and .NET Core, but beyond that a lot of it is managed for developers automatically these days.


Yeah, the VS interface for managing it is not great, but the new project file format is a lot easier to understand and manage, so I've had good luck just making manual changes.


> .NET has never had "DLL Hell"

I take it you've never worked on a .Net solution with projects targeting both full framework and Core framework.

Core v1.x stuff was a nightmare - haven't had so many issues with versioning in 20 years. Core v2.0 was still pretty bad but each v2 point release made decent strides - and specific packages would get updated out of band at times to fix issues.

But to say .Net has never had DLL Hell is just wrong. Even pre-Core you could run into difficult situations with conflicting downstream dependencies of directly used packages.


DLL Hell had nothing to do with project development. It was a problem of application deployment and running applications with DLLs in shared locations.

https://en.wikipedia.org/wiki/DLL_Hell

  The problem arises when the version of the DLL on the computer is different than the version that was used when the program was being created.

Other than the GAC, which was never recommended for use anyway, .NET has never had DLL Hell.


GAC was surely the recommended way until .NET 4.0, when the location changed.


No, the recommended way was to install your application with all dependency DLLs in the application install location.

And I've misspoken about GAC causing DLL Hell for .NET. It fixed DLL Hell, but introduced a new Strong Naming Hell.


Initially that recommendation only applied to native code DLLs, not managed ones, as far as I can remember.


Yeah, lots of assemblies is kinda like .NET DLL hell ;) .NET Core had this problem in earlier versions. Not anymore.


Maybe assembly unloading, in which case, if you wanted to swap in new DLLs in memory, you'd have to tear down the process and restart it. Definitely not sexy. That being said: I think assembly unloading is a thing now (and AppDomains don't exist in .NET Core, last I heard).


In the .NET Framework (i.e. not .NET Core) you can load and unload assemblies without tearing down whole processes by creating AppDomains within a process. You then load your desired assemblies into those AppDomains, consume them, and when you need to load, say, a newer DLL version, you just tear down the AppDomain and create a new one for the new DLLs.

It's a feature that's been around since .NET 1.1, and I used it heavily 10+ years ago.

https://docs.microsoft.com/en-us/dotnet/api/system.appdomain...


Yeah, if I remember correctly, it's indeed a thing in the latest versions of .NET Core.


This looks like a pretty cool demonstration of what we're talking about: https://www.strathweb.com/2019/01/collectible-assemblies-in-...

See the section titled: Collecting a dynamically emitted assembly


Yeah that's exactly it. Thanks!


From my perspective I'd love to, but after writing C# for nearly 20 years, it's a rough ride for anyone trying to keep up with the platform. Knowledge is thrown in the trash faster than you can learn it, and direction changes of all sorts jump on you. I'm not up for investing my time in that any longer.


I don't think that's true. If you're familiar with functional programming concepts, nothing in C# is really that new. In my experience, C# is just getting easier to use as they add more language support for things you'd normally have to implement on your own.


The core language is fine but the surrounding ecosystem is volatile.

