If you can afford losing a bit of frequency stability, you can use a varicap instead and control it by voltage. Precise multiturn potentiometers are much cheaper. Or just buy a programmable clock signal generator based on PLL and then make it into a superhet (you can still have analog filtering and detection, but digital frequency synthesis so it can look more like a modern radio with frequency display).
There are so many options and all are very cool to explore.
Yes, varicaps are good and there are even varicaps with very large capacitance swings exactly for this purpose.
But it does require a very steady voltage and many more parts than just a single passive to not accidentally load the resonant circuit to the point that it becomes ineffective. You need to de-couple considerably to make this work, especially without injecting (phase) noise. And frequency stability and phase noise can be extremely annoying in particular applications.
That’s why you probably want to use it in an oscillator rather than a filter. Then some of those problems go away, although some new ones appear, like having to add a mixer …
Some time ago I thought making a double superhet would be too complex, but apparently it turned out to be quite a nice DIY project.
One of the problems I'm struggling with is that I'm trying to recover a signal with a known pattern from under the noise floor and every little bit helps. Varicaps will come into play once there is enough signal that the thermal noise and power supply influence would no longer drown out the signal. Interesting project but it is one of those where when you start you think 'how hard could this be?' only to find out that it is in fact pretty hard. We'll see if I can pull this off or not, I give it 10% chance at the moment.
Oh, and superhet wouldn't work, that would destroy the valuable part of the signal.
What do you mean it would destroy the valuable part of the signal?
If the signal has a carrier (like in AM or FM) or some other regular pattern (e.g. a digital signal) then the typical way to recover it from under the noise floor is to use a narrow-band PLL.
I’m experimenting with it right now and just found that a quartz/ceramic oscillator pulled by a varicap controlled by a PLL gives pretty good results - low phase noise, good noise rejection and ability to recover the carrier from a very noisy signal. But for that to work, the signal has to be shifted to a constant frequency through heterodyning first.
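To make the "narrow-band PLL pulls a carrier out of the noise" idea concrete, here's a rough software model of it. Everything here (the function name, loop gains, sample rate, frequencies) is an illustrative assumption, not the actual hardware design being discussed: a second-order PLL tracking a noisy carrier after it has been heterodyned down near a known frequency.

```python
import math
import random

def pll_recover_frequency(samples, fs, f0_guess, kp=0.02, ki=1e-4):
    """Second-order PLL tracking a real carrier near f0_guess (Hz).

    Mixes the input with a quadrature NCO; the low-frequency part of the
    product is the phase error, which steers the NCO frequency and phase.
    Returns the NCO frequency (Hz) averaged over the last 10% of samples.
    """
    phase = 0.0
    freq = 2 * math.pi * f0_guess / fs   # NCO step, rad/sample
    n_tail = max(1, len(samples) // 10)
    tail = []
    for i, x in enumerate(samples):
        err = x * -math.sin(phase)       # phase detector (also yields a 2f
                                         # term that the loop averages out)
        freq += ki * err                 # integral path: frequency pull-in
        phase += freq + kp * err         # proportional path: phase tracking
        if i >= len(samples) - n_tail:
            tail.append(freq)
    return sum(tail) / len(tail) * fs / (2 * math.pi)
```

With a deliberately wrong initial guess and heavy additive noise, the loop still converges to the true carrier frequency, which is the whole appeal of the technique.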
The exact phase of the input is what I'm after. Using an LO would cause you to read the mixture of the LO and the input signal rather than just the input signal, and that means you will never see the phase with the same precision as if you were to observe it directly, because the two will have slightly different frequencies.
If you were to put both on a scope in XY mode you'd see the phase change directly over time.
But I like your two step approach, use the het to get close and then lock on to the phase.
So far I've not been successful without first bringing the signal up to the point where I can do it directly and then I don't need to add the complication of a PLL.
Injecting various levels of noise into the signal is a good test to see if the system would still work in less-than-ideal conditions, and so far the answer is a hard 'no'. It may well be that I'm past my level of competence on this, but it's only been a couple of weeks so I will keep trying.
One possible approach is just to use a flash ADC and move the whole thing into the digital domain. That has a bunch of other advantages as well; the signal is not particularly high frequency so that should be possible without breaking the bank.
> Which is why when folks nowadays say "you cannot use XYZ for embedded", given what most embedded systems look like, and what many of us used to code on 8 and 16 bit home computers, I can only assert they have no idea how powerful modern embedded systems have become.
Yet, I still need to wait about 1 second (!) after each key press when buying a parking ticket and the machine wants me to enter my license plate number. The latency is so huge I initially thought the machine was broken. I guess it’s not the chip problem but terrible programming due to developers thinking they don’t need to care about performance because their chip runs in megahertz.
There's no pressure to make a good product because nobody making this decision has to use the machine. Everywhere I've worked purchase decisions are made by somebody with no direct contact to the actual usage, maybe if you're lucky they at least asked the people who need the product what the requirements are, otherwise it's just whatever they (who don't use this product) thought would be good.
"Key presses are 15x slower than they should be" gets labelled P5 low priority bug report, whereas "New AI integration to predict lot income" is P0 must-fix because on Tuesday a sales guy told a potential customer that it'd be in the next version and apparently the lead looked interested so we're doing it.
Not just that, nobody chooses their parking spot based on the UI of the machine.
Banks and phone manufacturers now care about UI, because some of them started to do so, and people started switching to them en masse. US carriers were bleeding subscribers left and right when the iPhone was only available on AT&T, which was the first time people started switching plans to get a specific phone instead of the other way around.
People usually choose their parking based on where they want to go and how far it is from that place, and that trumps all other considerations. Paying more for programmers or parking machine processors would be a waste of money.
Interesting story; I went to park at a downtown lot in my local city (Vancouver BC) and the machine had an unusual UI. So I skipped the machine and scanned the QR code for the app. By the time I had taken the elevator up to the lobby of the building I had the app.
But then the usability on the app was so bad, that I actually could not figure out how to buy parking. The instructions were clear, but the latency on the app was unusable. The Internet connection was fine. It was the app. So I skipped the whole thing, went to dinner, and was happy when I found my car without a ticket.
"Unable to buy a ticket" would have been an interesting day in court.
I live in vancouver and cannot install such apps on my phone. While you may have found the machine's UI unusual, I use them quite often and I suspect that people like me would invalidate your claim... if it went to court. But parking lots aren't the purview of the courts -- enforcement of private parking happens privately, so your sorrows would likely fall on the hardened ears of a privately owned impound lot operator.
My partner and I frequently "race" at the parking game and I win at the "slow" machine nearly every time because the apps are so unresponsive and badly designed.
> Paying more for programmers or parking machine processors would be a waste of money.
The rise of parking apps on mobile adds an interesting angle to this.
No doubt, many of us favour apps because the UX is so much better. Not quite sure if that affects the bottom line short-term, but long-term I’m sure it will.
I worked at a purchasing dept. where each commonly ordered part or service had a six-digit item number that had to be entered. The CFO picked some company to do the new version of the software, and they decided to randomly assign new, different item numbers, which included 13 leading zeros, to each item. So now everyone had to learn the new item numbers and type in the 13 leading zeros each time.
While this is a decision-making problem, it is also an engineering incompetence problem. No matter what pointy haired boss is yelling about "priorities" ultimately software developers are the ones writing the code, and are responsible for how awful it is.
When it comes to priorities about what to write and what to focus on, the buck stops at management and leadership. When it comes to the actual quality of the software written, the buck stops at the developer. Blame can be shared.
Precisely this. We love to cast our colleagues as competent victims of the system, but a competent engineer is unlikely to build an embedded UI with high latency on their first try. It's a combination of cheap, underqualified labour and careless management.
Certainly one of the benefits of my "Fuck Off Fund" is that for a good many years now it has enabled me to be unburdened by concerns about whether I might get fired for saying what I think to management.
I'm at much lower risk than the imagined target of the "Fuck Off Fund" concept for things like inappropriate sexual contact or coercive control, but I find it really does lift a weight off you to know that actually I don't have to figure out whether I can say Fuck Off. The answer to that is always "Yes" which leaves only the question of whether I should say that. Sometimes I do.
And you know, on zero occasions so far have I been fired as a consequence of telling management to fuck off. But also, I had to think hard about that because, thanks to the fund, I had never worried about it. I've been fired (well, given garden leave, same thing) but I have no reason to think it's connected to telling anybody to fuck off.
If developers prioritize customer experience instead of velocity and cost in situations where that isn't warranted, the company they work for can no longer sell products as cheaply as their competitors do. This decreases their market share and their revenue, which means they'll employ fewer developers in the future.
This is almost an evolutionary process, many (but not all) markets choose for developers which don't care about such things.
> When it comes to the actual quality of the software written, the buck stops at the developer. Blame can be shared.
No. The quality is not prioritized by management. A dev that fails to ship a feature because they were trying to improve "quality" gets fired.
We have no labor power because morons spent the good times insisting that we don't need a professional organization to solve the obvious collective action problem.
The idea that workers are not responsible for their own competence or the quality of their work output is such a bizarre take that you really only see on HN. Just because nobody is forcing you to write quality code, doesn't mean you shouldn't. Nobody is forcing you to bathe or brush your teeth, either, so why do we do it?
Everyone was locked out of a building I am staying at (40-something stories) for several hours. When I asked the concierge if I could have a look at the system, it turned out they had none. The whole thing communicated with AWS for some subscription SaaS that provided them with a front-end to register/block cards. And every tap anywhere (elevators/doors/locks) in the building communicated back with this system hosted on AWS. Absolute nightmare.
Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.
Easier maybe, but significantly worse. Parts of these systems have been built and engineered to be entirely reliable, with automatic hand-overs when some component fails or with alternative routings when some connection is lost.
>than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.
You don't need any of that. You need one more box in the electrical closet and one password protected wifi for all the crap in the building (the actual door locks and the like) to connect to.
The IT guy walks in and replaces/restarts the box instead of waiting for the gods of AWS to descend to earth and restart theirs. They have direct control vs. waiting for something magic to happen.
Have you ever actually seen these systems in person? It's usually a microcontroller which already rules out a ton of stuff you're talking about. Serious places will buy 2-3 of them at the time of installation to have some spares. The ones here are "user-replaceable" as well (unplug these three cables, replace the box, plug them back in). It's not some mysterious bunch-of-wires-on-arduino-pins magic box that nobody dares to touch.
The one at my previous office even had centralized management through an RS232 connection to a PC. No internet and related downtime at all. And I don't recall us ever being locked out because of that.
It's absolutely possible to have both a SaaS-based control plane and continue functioning if the internet connection/control plane becomes unavailable for a period. There's presumably hardware on site anyway to forward requests to the servers which are doing access control; it wouldn't be difficult to have that hardware keep a local cache of the current configuration. Done that way, you might find you can't make changes to who's authorised while the connection is unavailable, but you can still let people who were already authorised into their rooms.
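The cached-fallback idea above can be sketched in a few lines. All names and the error-handling policy here are assumptions for illustration, not any vendor's actual API:

```python
class DoorController:
    """On-site controller that syncs an authorization list from a cloud
    control plane, but keeps serving from its last known-good copy when
    the connection is down."""

    def __init__(self, fetch_authorized):
        # fetch_authorized() returns an iterable of badge IDs,
        # or raises OSError on a network failure
        self.fetch = fetch_authorized
        self.cache = set()

    def refresh(self):
        try:
            self.cache = set(self.fetch())
            return True
        except OSError:
            return False          # stale cache is better than a locked building

    def may_enter(self, badge_id):
        self.refresh()            # opportunistic sync; failure is non-fatal
        return badge_id in self.cache
```

The trade-off is exactly as described: while offline you can't revoke or grant badges, but everyone already authorized still gets in.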
The doors the system controls don't have any of this. Hell, the whole building doesn't have any of this. And it definitely doesn't have redundant internet connections to the cloud-based control plane.
This is fear-mongering when a passively cooled PC running a container image on boot will suffice plenty. For updates, a script that runs on boot and at regular intervals and pulls down the latest image, with a 30s timeout if it can't reach the server.
> Some PC somewhere with storage is a bigger problem
Both an embedded microcontroller and a PC have storage. The reason you can power-cycle a microcontroller at will is because that storage is read-only and only a specific portion dedicated to state is writable (and the device can be reset if that ever gets corrupted).
Use a buildroot/yocto image on the PC with read-only partitions and a separate state partition that the system can rebuild on boot if it gets corrupted and you'll have something that can be power-cycled with no issues. Network hardware is internally often Linux-based and manages to do fine for exactly this reason.
A large number of embedded controllers are just PCs running a Yocto Linux image configured as GP said. You can save money with a $0.05 microcontroller, but in most cases the development costs to make that entire system work are more than just buying an off-the-shelf Raspberry Pi.
You know what else would suffice plenty? Physical keys and mechanical locks. They worked (and still work) without electricity. The tech is mature and well-understood.
The reason for moving away from physical keys is that key management becomes a nightmare; you can't "revoke" a key without changing all the locks which is an expensive operation and requires distributing new keys to everyone else. Electronic access control solves that.
It's also easier to keep all the water for fighting fires in trucks that are remote, than to run high pressure water pipes to every room's ceilings, with special valves that only open when exposed to high heat. Imagine the overhead costs!
A card access system requires zero cooling, it’s a DC power supply or AC transformer and a microcontroller that fits in a small unvented metal enclosure. It requires no management other than activating and deactivating badges.
There is no reason to have any of the lock and unlock functionality tied to the cloud, it’s just shitty engineering by a company who wants to extract rent from their customers.
The server running that system needs cooling, yes. You can't just shove it in a closet with zero thought and expect it to not overheat/shut down/catch fire, unless you live in the Arctic.
There are card access systems that don’t require a computer, just a microcontroller. Perhaps if you need to integrate with multiple sites or a backend system for access control rules you can add computers, but card access systems are dead ass simple for a reason; they need to be reliable. The good systems that have computers still allow access in the event of a network failure.
Any access control system that fails in the event that it loses internet connectivity is poorly designed.
I have a little fanless mini PC that runs various stuff around my house, including homeassistant. The case is basically a big heat sink.
It started crashing during backups.
The solution was to stick a fan on it. :( This is literally a box _designed to not need a fan_. And yet. It now has a fan and has been stable for months. And it's not even in a closet - it's wall-mounted with lots of available air around it.
I'm guessing it's the HDD that's failing. Had such mysterious failures with my NVR (the Cloud Key thingie) from UniFi. Turns out, HDDs don't like operating in 60+ degree Celsius heat all the time - but SSDs don't mind, so fortunately the fix was just to swap the drive for a solid state one.
I think it was the DRAM on mine, oddly. It already uses an nvme ssd. Could have been the CPU, of course - the error was manifesting as memory corruption but that could well have been happening during read or write.
That is, in fact, exactly what we typically see in reality with local access control system head-ends.
At the doors, there might be keycards, biometrics and PINs (oh my!) happening.
But there's usually just not much going on, centrally. It doesn't take much to keep track of an index of IDs and the classes of things those IDs are allowed to access.
The system was not built with resiliency in mind and had no consideration for what a shit-show would unfold once the system or the link goes down. I wonder if exit is regulated (you can still fully exit the building from any point using the green buttons, and I think these are supposed to still work even if electricity is down).
> Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.)
An isolated building somewhere in the middle of the jungle, dependent for its operation on some American data center hundreds of miles away, is simply negligence. I am usually against regulations, but clearly for certain things we cannot trust that all humans will be reasonable.
Now I am waiting for time when they move us-east-1 physical security to run in us-east-1... Thus locking themselves out when needing some physical intervention on servers to get backup.
This is in SEA. They probably operate from ap-southeast-1 or 2. But yeah, if the internet goes down, the provider service goes down or AWS goes down they are cooked.
A lot of modern glass is hard to break. In many cases this is a safety feature (if you can't break the glass you can't get shoved out the window in a fight...)
My first guess was debouncing. They assume that the switches are worn out, deeply weathered, and cheaply made. Each press will cause the signal to oscillate and they're taking their sweet time to register it.
When the device is new this is an absurd amount of time to wait. As the device degrades over 10, 20 years, that programming will keep it working the same. Awful the entire time, yes, but the same as the day it was new.
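If that hypothesis is right, the firmware might contain something morally equivalent to this sketch. The one-second hold time is an assumption chosen to match the observed latency, and the whole thing is an illustration, not the actual machine's code:

```python
import time

class Debouncer:
    """Time-based debounce: ignore any input change within `hold_s`
    seconds of the last accepted change. A very conservative hold time
    keeps worn, bouncy switches usable for decades -- at the cost of
    terrible latency even when the hardware is brand new."""

    def __init__(self, hold_s=1.0, now=time.monotonic):
        self.hold_s = hold_s
        self.now = now                # injectable clock, handy for testing
        self.state = 0
        self.last_change = float("-inf")

    def sample(self, raw):
        """Feed one raw reading; return True only on an accepted press."""
        t = self.now()
        if raw != self.state and (t - self.last_change) >= self.hold_s:
            self.state = raw
            self.last_change = t
            return raw == 1           # rising edge = key press
        return False
```

Note how the same logic that swallows contact bounce also swallows a fast typist's next keystroke, which is exactly the behaviour people complain about at the ticket machine.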
I was late for a train at my local station and the parking machine was taking ages to respond to keypresses. I could see the train pulling up to the platform and I was still stuck entering the second digit of seven. In my shameful frustration I hit the machine fairly hard. While the button presses might take a while to register, the anti-tamper alarm has really low latency and is also quite loud.
You need to find the right person to complain to. Here we are sympathetic, but can't do anything.
The right person is the other riders on the train - but the hard part is to frame this such that they join you on a march to the agency that owns the machines to complain. I wish you the best of luck figuring out how to do that (I don't know how to do it - and if I did, there might be higher-priority things that need to be fixed).
Well it was six years ago, I work from home now and take the train once a quarter, and they've augmented the machines with app parking now so I have nothing to complain about anymore :)
Debouncing would be smart, sure. But sometimes, these sorts of embedded machines are weirder than that.
At Kroger-brand gas stations near me, I get to interact with the buttons on gas pumps to select options and enter a loyalty ID.
Those buttons have visible feedback on a screen, and also audible feedback consisting of a loud beep. And there's always delays between button press and feedback.
Some combination of debounce and wear might explain that easily enough.
Except... the delay between pushing a button and getting feedback is variable by seemingly-random amounts. The delay also consistently increases if a person on the other side of the pump island is also pushing buttons to do their own thing.
It's maddening. Push button, wait indeterminate time for beep, and repeat for something like 12 or 13 button presses -- and wait longer if someone else is also using the machine.
I can't rationally explain any of that variability with debounce.
Or perhaps the original programmers skipped the class on concurrency 25 years ago, and nobody has subsequently bothered to pay anyone to update that part of the software.
One time I decided to test whether these grocery store loyalty card XX-cents-off-per-gallon transactions were properly isolated, when my wife and I were both filling up vehicles at the same gas station at the same time. We both got the $0.50 discount per gallon with no problem. I'm sure there are lots of creative ways you can exploit the poor design of these things.
That's programmer incompetence. Unfortunately pervasive, especially with devices like parking meters, EV chargers, and similar, where the feedback loop (angry customer) is long (angry customers resulting in revenue decrease) or non-existent.
It's a nice theory, but many of those terrible parking ticket machines predate smartphones, so it might be the case for machines built now, but it's really hard to imagine that that was the original intention
I work in an adjacent industry, and trust me when I say that a lot of older equipment companies just did not care much about the experience of using the equipment. It's much more important to tick all of the boxes in the back end accounting system than to have a high quality experience on the kiosk.
Each keypress is appended to an 80-line prompt (key name along with timestamp of keypress and current text shown on the screen) and fed to a frontier LLM. Some of the office staff banged on the keypad for a few hours to generate training data to fine-tune the LLM on the task of debouncing key presses.
Thanks to some optimizations with Triton and running multi-GPU instances, latency is down to just a few seconds per digit entered.
You see, we needed to hit our genAI onboarding KPIs this quarter…
Whilst I cannot see a motivation, I refuse to accept that parking machines are not adversarial design. Why do they have half a dozen things that look a bit like tap-to-pay if they are not trying to make it easier for card skimmers?
Some of these are just dumb terminals with the entire state handled on a server. I've seen a bunch of them freeze at once where no UI would respond (but the interactions were buffered) and then when the network hiccup was over they all unfroze and reflected the input.
The self service kiosks are intentionally throttled when scanning barcodes, at a guess to prevent people accidentally scanning the previous/wrong item - I once had some problems with one and a staff member flipped it into supervisor mode at which point they were able to scan at the same rate you'd see at a manned checkout.
I think that's handled by the barcode scanner itself, at least on the ones I've used. The scanner will not recognize the same code immediately, but will immediately pick up a different code.
What's slow is that after each scan it needs to check the weight which means it lets the scales settle for one second before accepting another scan.
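The two mechanisms described here, duplicate-read suppression and a scale-settle delay before the next scan, could be sketched like this. The class name, thresholds, and structure are all made up for illustration:

```python
class ScanGate:
    """Sketch of self-checkout scan gating: a re-read of the same barcode
    within `dup_s` seconds is ignored (double read of one label), and no
    scan at all is accepted until `settle_s` seconds after the previous
    one, giving the bagging-area scale time to settle."""

    def __init__(self, settle_s=1.0, dup_s=2.0):
        self.settle_s = settle_s
        self.dup_s = dup_s
        self.last_code = None
        self.last_t = float("-inf")
        self.items = []

    def scan(self, code, now):
        if code == self.last_code and (now - self.last_t) < self.dup_s:
            return False              # double read of the same label
        if (now - self.last_t) < self.settle_s:
            return False              # scale hasn't settled yet
        self.items.append(code)
        self.last_code, self.last_t = code, now
        return True
```

This also shows why a supervisor mode that relaxes `settle_s` makes the same machine suddenly scan as fast as a manned checkout.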
Now take that, and add someone in our Polish supermarket chain (Biedronka) having the dumb "insight" to disable "scan multiple" option. Until ~month ago, whenever buying something in larger quantity, I could just press "Scan multiple", tap in the amount, scan the barcode once, and move all the items of the same type to the "already scanned" zone. Now, I have to do it one by one, each time waiting for the scales to settle. Infuriating when you're buying some spice bag or candy and have to scan 12 of them one by one.
I scan as fast as a manned checkout (I did my time in retail). And I can scan my groceries at that speed whilst the people next to me spend most of their time rotating an item to find the barcode.
Sorry to rant, but this kind of stuff is the only thing that triggers me. It's gotten so bad that my family makes me put a dollar in a 'complain jar' every time I talk about how poor quality software has become.
Just one recent example: few months ago, I replaced a Bosch dishwasher with the latest version of the same model. Now, when I press the start button to initiate the cycle, it takes over 3 seconds for it to register! Like, what is going on in that 3 seconds?
How was it possible that even 'kind of good' developers like me were able to do much more with much less back in the 90s? My boss would be like, "Here's this new hardware thingy and the manual. Now figure out how to do the impossible by Monday." Was it because we had bigger teams, more focus, fewer dependencies?
I think we've been trained to accept bad software at this point, and a lot of people don't know anything different.
I suspect that a lot of it is caused by shoving Android onto underpowered devices because it is cheap and seems like an easy button. But I don't know for sure, that's just an impression. I have no numbers.
Could there be an opportunity here, for a specialized kiosk OS or something like that?
What can you expect, when people assume as normal shipping the browser alongside the "native" application, and scripting languages using an interpreter are used in production code?
Maybe that ticket machine was coded in MicroPython. /s
- TCL/Tk: slowish in P3 times, decent enough under a P4 with SSE2. AMSN wasn't that bad back in the day, and with 8.6 the occasional UI locks went away.
- Visual Basic: yes, it was interpreted, and you used to like it. The GUI ran fast; good for small games and management software. The rest... oh, they tried to create a C64 emulator under VB, and it ran many times slower than one created in C. Nowadays, with a P4 with SSE2 and up, you could emulate it at decent speeds with TCL/Tk 8.6, since they got an optimized interpreter. IDK about VB6, probably the same case. But at least we know TCL/Tk got improved on multiprocessing and the like; VB6 was stuck in time.
- TCL can call C code with ease, and has been able to since the early 90's. Not the case with Electron. And JS really sucks with no standard library. With Electron the UI can be very taxing, even if they bundle FFMPEG and the like. A Tk UI can run on a toaster.
- Yeah, there is C#... but it isn't as snappy and portable as TCL/Tk with IronTCL, which even targets Windows XP. You have JimTCL, which can run on scraps. No Tk, but the language is close in syntax to TCL, it has networking and TLS support, and of course it has damn easy C interop. And if you are a competent programmer, you can see it has some alpha SDL2 bindings. Extend those and you can write a dumb UI with Nuklear or similar in days. Speed? It won't win against other languages at number crunching, but it could certainly be put to drive some machines.
I worked at a startup that was mostly powered by Tcl; the amount of rewriting in C that we had to do between 1999 and 2003, when I left the company amid all those dotcom busts, made me never again pick a language without at least a JIT for production code.
The founders went on creating OutSystems, with the same concepts but built on top of .NET, they are one of the most successful Portuguese companies to this day, and one of the few VB like development environments for the Web.
Forth is usually interpreted and pretty fast. And, of course, we have very fast Javascript engines these days. Python speed is being worked on, but it's pretty slow, true.
Basically it is because Forth programs are fairly flat and don't go deep into subfunctions. So the interpreter overhead is not that great, and the processor spends most of its time running the machine code that underlies the primitives at the bottom of the program.
Some Forths are dog slow, such as PFE compared to GForth. Meanwhile, others running on really slow platforms such as subleq (much faster in muxleq) run really fast for what the VM actually is (barely better than an 8086).
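The "flat programs, thin dispatch" point is easy to see in a toy model. This is not any real Forth, just a hypothetical sketch of an indirect-threaded inner interpreter, where primitives do the actual work and the dispatch layer is a single loop:

```python
# A toy Forth-style inner interpreter. A word is either a Python callable
# (a primitive, the machine-code analogue) or a list of words (a colon
# definition). Dispatch is one type check plus one call per word -- this
# thin layer is why flat Forth programs run close to the speed of their
# primitives.

def execute(word, stack):
    if callable(word):
        word(stack)             # primitive: do real work
    else:
        for w in word:          # colon definition: a thread of words
            execute(w, stack)

# primitives
def lit(n):                     # build a literal-pushing primitive
    return lambda s: s.append(n)
def add(s): s.append(s.pop() + s.pop())
def dup(s): s.append(s[-1])
def mul(s): s.append(s.pop() * s.pop())

# : SQUARE  DUP * ;    : DEMO  3 SQUARE 4 SQUARE + ;
square = [dup, mul]
demo = [lit(3), square, lit(4), square, add]

stack = []
execute(demo, stack)
# stack is now [25]  (3*3 + 4*4)
```

A program like `demo` spends almost all of its steps inside primitives; only the short `for` loop is interpreter overhead, which is the effect described above.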
I’ve had that bug a few times. I think it’s related to some roads being closed and/ or under reconstruction. I’ve seen this happen multiple times on the same route, and it always fixed itself after I passed the construction site.
So it might be a hack to get it to take another way -- rather than invent some way of marking a road section as being closed, mark it as being 1000 miles long?
Technically it's not a pause like the pauses introduced by a typical STW tracing GC. It does not stop the other threads; the app can still continue to work during that cleanup.
And it pops up in the profiler immediately with a nice stack trace showing where it rooted from. Then you fix it by e.g. moving cleanup to background to unlock this thread, not cleaning it at all (e.g. if the process dies anyway soon), or just remodel the data structure to not have so many tiny objects, etc.
Essentially this is exactly "way more deterministic and easier to understand and model". No-one said it is free from performance traps.
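The "move cleanup to background" fix mentioned above can be sketched like this. This is a hypothetical helper, not any particular library's API; the idea is just that the hot thread does an O(1) enqueue and a dedicated thread pays the teardown cost:

```python
import queue
import threading

# Hand large doomed data structures to a background thread so the hot
# thread never pays the teardown cost itself. The last strong reference
# moves into the queue; the worker drops it, and the (possibly slow)
# cascade of frees happens off the critical path.

_graveyard = queue.Queue()

def _reaper():
    while True:
        doomed = _graveyard.get()
        del doomed                # teardown happens here, on this thread

threading.Thread(target=_reaper, daemon=True).start()

def dispose_later(obj):
    """Called from the hot path: O(1), just an enqueue."""
    _graveyard.put(obj)
```

The same shape works in Java (a queue drained by a background thread that nulls out the references), which matches the profiler-driven workflow described above.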
> And free itself can be expensive.
The total amortized cost of malloc/free is usually much lower than the total cost of tracing; unless you give tracing GC a horrendous amount of additional memory (> 10x of resident live set).
malloc/free are especially efficient when they are used for managing bigger objects. But even with tiny allocations like 8 bytes size (which are rarely kept on heap) I found modern allocators like mimalloc or jemalloc easily outperformed modern GCs of Java (in terms of CPU cycles spent, not wall clock).
We don’t need it in Poland. We’ve been using a similar but official government issued app with ID, driving license, car documents for years now. Works both on Android and iPhone. Can be also used for logging into government web apps like taxes, for document signing or for voting. And it reminds me whenever my car insurance expires or it needs the annual check. Pretty impressive IMHO.
As someone who's always dabbled in electronics, skimmed and read some books, my primary complaint abot most electronics texts is that they just talk about individual topics: oscillators, amplifiers, etc.
What they never talk about, is putting them all together.
But as witnessed by this list, that's what a radio is. A collection of these "meta" components into a whole to get a better radio experience.
A radio built like this, with individual subsystems connected together, is much more understandable. Many (not all) radio schematics are presented as a whole rather than as parts, with no discussion of why you might (or might not) want to swap one part for another (not components, but, say, one filter circuit for a different one).
It just seems to me that once you get past some basic theory, starting with a radio, and then systematically taking it apart is a better way of approaching electronics education.
"A radio built like this, with individual subsystems connected together, is much more understandable."
Yes! this has been my experience too, building something from first principles and given some tools and direction to experiment you get the chance, and experience, to really learn.
I've been looking for resources like this for building amps but they're either small signal or the whole design. You understand how they work but not where and what to change if you wanted to tinker or build your own.
Haha I know! When I was even younger, we had a radio that could receive SW, AM, MW, and FM (in the TV range as well).
I used to hook up the antenna to various things like wire mesh or tv antennas etc and used to listen to short wave and AM for hours. I even got signals from far away countries, it was really fascinating!
Also I had seen some recon antennas in a certain campus (can't say much about that) when I was a kid. Those were like long wires hanging from towers. I believe they used to receive/decode SW/AM signals from far away. I realised this much much later in my life. But fascinating nonetheless.
And adding to all these is SDR! That's a whole different thing.
Oh, thanks, good to know. Now I feel more motivated. Because actually it’s not as easy as it looks from the text books. It’s like with drawing an owl. Yeah, pass the signal through a mixer and feed recovered carrier to its LO port and you’re done. Sure. Simple. Now just recover the carrier. So far I have built a PLL that locks to a clean signal but stops locking when the signal is modulated too much. Aargh.
I wish it was easier to buy a portable radio with one. Though admittedly I tend to use mostly vintage radios - as such I do most of my shortwave listening on a Zenith T-O which is pretty wonderful both in audio quality and capacity to pull in stations.
However, gradle build or mvn install won’t select a proper version of Java to build your code. It won’t even tell you are building with wrong version. Rust, Go, even Scala SBT will.
Normally you just define source and target compatibility and then use whatever JDK supports them; depending on an exact version is a rare thing. Neither Gradle nor Maven is an independent native tool; both run on the same JVM they use, so they are not even aware of your specific OS configuration and available JDKs. But they will certainly tell you if your JDK does not support your source or compile target.
> However, gradle build or mvn install won’t select a proper version of Java to build your code.
build.gradle.kts
java {
toolchain {
languageVersion = JavaLanguageVersion.of(17)
//bonus: pick your favorite distribution with vendor = JvmVendorSpec.<X>
}
}
Oh yes they will.
> It won’t even tell you are building with wrong version.
Right, "Class file has wrong version" doesn't explicitly tell you it's the wrong JDK. Gradle itself runs from a JDK8, so even the install you made with your Windows XP will work fine.
If your last experience with Java was 20 years ago and you think that for some reason it hasn't kept up with modern advancement and leapfrogged most languages (how's async doing in rust? virtual threads still stuck nowhere? cool.), sure, keep telling yourself that. You could at least keep it to yourself out of politeness though, or at the very least check that what you're saying is accurate.
You can work with SMT at home no problem. A decent hot air station like Quick 861dw will cost you just about $300 and you don’t need much more to tinker.
Treating circuits as black boxes is usually a very leaky abstraction, because how circuits work depends a lot on what's connected to them. And they have plenty of attributes that can interact in very weird ways.