CarPlay is essentially a conditional pair of video inputs. Any system that supports an on-screen rear-view camera and has a wheel speed sensor can support CarPlay.
What do you really need? Whether the car is moving forward or backward is the only thing you can't figure out from GPS on the phone (this is possibly what they are getting from wheel speed - GPS speed is more accurate if you have a signal).
There are a lot of nice-to-haves of course. GPS does eat phone battery, so it's better if the car can give you that. There is a lot of other car data that is interesting - why force plugging in an OBD-II dongle to get DTCs, RPM, O2 sensor values, or whatever? However, for CarPlay to work at all it doesn't need anything more.
Nav apps on phones will use dead reckoning if they don't have a GPS signal, so they don't really even need the wheel-speed sensor, but I'd guess they use it just to increase accuracy, e.g. in a long tunnel.
Increasing accuracy is a wild understatement. Dead reckoning with mobile phone hardware won't give you a usable result for long. Maybe your experiences are from tunnels with dedicated beacons to tell the phone where you are.
Most vehicular tunnels aren't too terribly long. If you're stuck in one in traffic, yeah, dead reckoning drifts quite a bit, but if you're driving through one in a minute or two, it's sufficient.
On a side note I've personally had bad experience with beacons in train tunnels telling me I'm miles away from where I'm actually at.
Well done! I’ve built this same sort of thing for my family to play with. My advice for the best results:
1) Structure the choices offered by the LLM; add a “choice_type” field and give the LLM instructions on what each type of choice should do. E.g. action, dialogue, investigation, whatever makes sense for the genre (the LLM can even generate these at story start), then “choice should direct the narrator to focus on an action-oriented moment”, “choice should direct the narrator to focus on a dialogue between two or more characters in the scene”, etc. (See the sketch below.)
2) Use reasoning whenever making tool calls for choices, summarize the reasoning, and include it in narrative summaries provided as part of the context for future narrative requests. For example, the combined summary might be: “In the last narrative I wrote for the user, Harry and Luna were startled by the noise coming from the edge of the forest. Important scene developments: 1) Luna and Harry had been approaching the edge of the forbidden forest for the last three narrative turns, and in the turn I just wrote they arrived at the edge. 2) Harry seemed to be the more courageous of the two in previous narrative turns, but in the most recent one, the user’s choice resulted in Harry becoming more deferential to Luna. 3) In the most recent narrative turn, the noise that had been emanating from the forest was now accompanied by a flickering light. I then suggested paths that would allow for character development through dialogue between Harry and Luna (I gave two options here), a path to move the story forward by having Harry take Luna’s hand before running into the forest, and another path that would slow the pace by having Luna investigate the flickering light accompanying the sound. The user’s choice: ‘Have Luna investigate the flickering light.’”
3) Add an RNG, weighted by story length or whatever works for you, that will produce choices leading the story toward a conclusion. Include that direction in the tool call for generating choices, along with a countdown to the finale choice.
This is a rough mental sketch of what worked best for me; I purposefully left out implementation and application details, as I don’t know what you’re wanting to do next.
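For points 1 and 3, here is a minimal sketch of what the choice tool could look like, assuming OpenAI-style function calling in Python; every name in it (present_choices, reasoning_summary, target_length, etc.) is hypothetical, not the actual implementation:

    # Illustrative only: a "present_choices" tool schema plus a finale helper.
    import random

    CHOICE_TYPES = ["action", "dialogue", "investigation"]  # or generated at story start

    PRESENT_CHOICES_TOOL = {
        "name": "present_choices",
        "description": "Offer the reader 2-4 ways the story could continue.",
        "parameters": {
            "type": "object",
            "properties": {
                "reasoning_summary": {
                    "type": "string",
                    "description": "First-person summary of scene developments and why "
                                   "these choices were offered; saved and prepended to "
                                   "future narrative requests (point 2).",
                },
                "choices": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "choice_type": {"type": "string", "enum": CHOICE_TYPES},
                            "text": {"type": "string"},
                        },
                        "required": ["choice_type", "text"],
                    },
                },
            },
            "required": ["reasoning_summary", "choices"],
        },
    }

    def finale_directive(turn: int, target_length: int = 20) -> str:
        """Point 3: an RNG weighted by how far along the story is, nudging the
        model toward a conclusion, with a rough countdown in the instruction."""
        if random.random() < turn / target_length:
            remaining = max(target_length - turn, 1)
            return f"Steer the choices toward the finale; roughly {remaining} turns remain."
        return "Continue the story at its current pace."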
How do you come up with this? I find it quite hard to formulate exactly what you want from the LLM in general. Is this something you practiced? So good. Or is it just output from another AI, who knows, haha.
My answer to this in my own pet project is to mask terms found by the NER pipeline from being corrected, replacing them with their entity type as a special token (e.g. [male person] or [commercial entity]). That alone dramatically improved grammar/spelling correction, especially because the grammatical "gist" of those masked words is preserved in the text presented to the LLM for "correction".
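A minimal sketch of that masking step, assuming spaCy as the NER pipeline; the label-to-token mapping and function name are illustrative, and finer-grained tokens like [male person] would need more than the stock labels shown here:

    import spacy

    nlp = spacy.load("en_core_web_sm")

    # Illustrative mapping from spaCy entity labels to mask tokens.
    MASKS = {
        "PERSON": "[person]",
        "ORG": "[commercial entity]",
        "GPE": "[place]",
    }

    def mask_entities(text: str) -> tuple[str, list[str]]:
        """Replace recognized entities with an entity-type token so the LLM
        corrects around them; keep the originals for restoring afterwards."""
        doc = nlp(text)
        parts, originals, last = [], [], 0
        for ent in doc.ents:
            token = MASKS.get(ent.label_)
            if token is None:
                continue
            parts.append(text[last:ent.start_char])
            parts.append(token)
            originals.append(ent.text)
            last = ent.end_char
        parts.append(text[last:])
        return "".join(parts), originals

After the LLM returns its correction, the mask tokens are swapped back for the saved entities in order.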
Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit the project's popularity for potentially malicious purposes. It is imperative to rely solely on information from https://Helper-Scripts.com/ or https://tteck.github.io/Proxmox/ for accurate and trustworthy content.
One note: For truly "responsive" text (and other measures, like padding/margins), I often use relative units of measurement. At the very least, rem/em, but more and more I'm using viewport¹ (including dynamic) and container query² units. I'm not your target market, and I know that owning the renderer makes this request more complex than it would seem, but I thought I'd point it out just in case you think it should be on your radar.
This is something we are going to add for sure. Ideally, we will support any unit (em, rem, %) you can use in CSS for any property we expose like padding or font-size.
Well, my wife and I have been on a months-long experiment. We have HomePod minis, Macs and iPhones/iPads in the house. They are able to access the internet without restriction (other than using my own DNS resolver for ad/malware blocking purposes).
Our TVs (2020-era Vizio and 2018-era Samsung) are on a separate VLAN for home automation control, and are otherwise blocked from the internet¹. Additionally, they have the various "content intelligence" features disabled...just in case.
We also have a few Nest devices (the 1st gen wired Hello doorbell cam, the Nest/Yale deadbolt, a 2nd gen thermostat, and some Nest Protects) that are normally similarly segmented, though the Hello is allowed to communicate to the necessary domains for video streaming and PubSub notifications.
On August 1, while on a neighborhood walk without any electronic devices, we formulated the plan: every day, we'd find a reason to discuss mulch² in the presence of various devices in our home. What color of mulch we think would look best around various trees. The virtues of recycled rubber as a mulch substitute. The drainage issues it causes. And so on.
We committed to never searching for mulch online (to hide from the ever-present surveillance online), never discussing it with anyone (to avoid social network effects), never buying it (no data broker can hoover up mulch purchases), not dwelling on any social media post about mulch (analytics, man, it's crazy what that bit of metadata can do)...not even hanging around the garden department of local stores (gotta avoid bluetooth/BLE/wifi tracking).
But I DID disable the DNS blocklists (much to our browsing frustration). And while the smart home stuff remained on its own VLAN, I allowed it otherwise unfettered access to the internet during the month of August.
Since the experiment began, we've seen the net sum of zero (0) targeted ads about mulch. No banners, no interstitial social media posts, no phone calls, no flyers in the mailbox. Nothing.
I really don't believe that our devices are eavesdropping on us, but in the interest of science, the experiment continues for another month.³
---
1) Yes, I recognize that Sidewalk/ethernet-over-HDMI/hard-coded DNS/etc. is a purported "thing", but I don't believe it's likely. I'm controlling for this during the month of September by re-enabling the filtering mentioned at the start; if our TVs are committed to exfiltrating surveillance data, they'll have to fall back on one of those channels.
2) We've not really been discussing mulch. I'm using that as a proxy here, because all of the internet is a series of tubes that lead to advertising networks. But we did choose a unique topic of conversation that would be relevant to our demographics, geographical location, and season, and meaningful to advertisers.
3) On September 1, I re-enabled all the blocklists and VLAN network filters/blackholes. But we continue to discuss, er, mulch. Like I said, if our stuff really really wants to phone the mothership to have Big Mulch pay us a visit, there are supposed to be ways for them to do that. Right?
---
EDIT: The topic we chose is also something that's not typically discussed in our social network, nor our kids' social networks. I will say that it's related to a profitable market, and we're in the target demographic, but we did our best to identify a market that we didn't have in common with our social groups.
Data is served through a public MQTT server (dedicated to serving requests for this component). Thanks to geohash-based topics and some other optimizations, this greatly reduces the amount of data sent to clients compared to a direct websocket connection to the Blitzortung servers (it is also required by the Blitzortung data usage policy - third-party apps must use their own servers to serve data to their own clients).
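As a rough illustration of the geohash idea (the broker address, topic layout, and payload fields below are made up for the sketch, not the component's actual scheme), a client subscribes only to topics under the geohash prefix covering its area rather than to the full global feed:

    import json
    import paho.mqtt.client as mqtt

    BROKER = "mqtt.example.org"   # hypothetical broker, not the real server
    GEOHASH_PREFIX = "u33"        # short geohash covering the area of interest

    def on_message(client, userdata, msg):
        strike = json.loads(msg.payload)  # hypothetical payload shape
        print(f"lightning strike near {strike['lat']}, {strike['lon']}")

    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.on_message = on_message
    client.connect(BROKER, 1883)
    # Only strikes whose geohash falls under the prefix are delivered,
    # instead of every strike worldwide.
    client.subscribe(f"strikes/{GEOHASH_PREFIX}/#")
    client.loop_forever()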