Agreed. Having an "HTML + CSS" engineer on the team was largely due to the number of hacks needed to make CSS work -- purposely adding invalid HTML that would only be parsed by specific browsers (IE5 vs. IE6 vs. Netscape were wildly different, and Opera Mobile was out of this world different), using sprites everywhere because additional images would have a noticeable lag time (especially with JavaScript hover effects), clearfix divs to overcome float issues. To be clear, I'm not saying "things were harder back then" or "CSS is simple now", but things with CSS were so wild and the tooling was so bad that it was a unique skill of its own, one that is less needed now, and the shift has been for people to focus on going deeper with JS.
feedback: I'm trying to understand the high-level flow/usage but I'm still a bit confused.
ideas:
1) Maybe add a .yml example to the README under the /test_nyno screenshot so I know how you configured that workflow.
2) What are the ways to trigger a workflow? Just TCP?
3) The example runs "bestjsserver", which looks like it runs workflows? Are some workflows auto-running? Does it just log output from TCP commands that you manually run in another terminal? I'm a bit confused about what's going on here.
3) Workflows in the workflow-enabled folder are automatically loaded and made available, so you can execute those workflows via the drivers (e.g. for Python, JavaScript, or PHP projects).
You can either execute dynamic workflows (like executing a scripting language) or put .nyno files (YAML) in the workflow-enabled folder, and then run the workflow using the runWorkflow() functions specified in the drivers.
If I'm correct that the data center shown is at Belfort and Glenn Dr, then you don't even have to "research" zoning - just look across the street.
In 2021/2022 before it was built:
* Here is what that lot looked like [1]. To assume something wouldn't be built there is optimistic at best. (And there was precedent for data centers at the time - there was already a data center less than half a mile away on Vantage Data Plz, across the street from Tart Lumber.)
* If you look across the street, i.e. if the video had panned to the left, you would have seen the "US Customs and Border Patrol" building - not winning any architectural design awards [2].
For someone who bought their house decades ago, then yes - the area has transformed drastically. But grouping someone who purchased recently with someone who purchased decades ago is a bit muddled.
I also live near the data centers. While I'm obviously not 100% sure, I think that blue one is on Glenn Dr - for a bit of context, that is less than a 3-mile drive from Dulles International Airport. I'll see if I can record a video of the sound later tonight.
But there is an upside to this - you get the benefits of being a city with big business (tax revenue, donations to the local schools, investments in infrastructure) without the increased commuter traffic.
Are those benefits the norm in most cases though? I'm genuinely asking and don't know either way, but the companies building these data centers have quite a reputation for aggressively pursuing subsidies and tax avoidance strategies. Amazon for one has paid little to no federal taxes some years and they wouldn't be my first pick as an example of a magnanimous corporate entity, to say the least.
Whatever benefits there are have to be weighed against the very real costs. Residential power bills spiking is a hugely regressive burden for many struggling households.
Total beginner question: if the “structured software application” gives the LLM the prompt “plan out what I need to do for my upcoming vacation to NYC”, will an LLM with a weather tool know “I need to ask for the weather so I can make a better packing list”, while an LLM without a weather tool would either make the list without actual weather info OR your application would need to support the LLM asking “tell me what the weather is”, parse that, and then feed the answer back in a chained response? If so, it seems like tools are helpful in letting the LLM drive a bit more, right?
If you have a weather tool available it will be in a list of available tools, and the LLM may or may not ask to use it; it is not certain that it will, but if it is a 'reasoning' model it probably will.
You need to be careful about creating a ton of tools and presenting a list of all of them to the model, since it can overwhelm the model, and it can go down rabbit holes of using a bunch of tools to do things that aren't particularly helpful.
Hopefully you would have specific prompts and tools that handle certain types of tasks instead of winging it and hoping for the best.
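To make the flow concrete, here is a minimal sketch of the application-side loop (the stubbed model, the `getWeather` tool, and all names here are hypothetical illustrations, not any real API - a real setup would send the tool list to an actual LLM API and parse its structured tool-call replies):

```javascript
// Hypothetical tools the application exposes to the model.
const tools = {
  getWeather: (city) => `Forecast for ${city}: 5°C, rain likely`, // stub
};

// Stand-in for the LLM: given the prompt and any tool results so far,
// a real model would return either a final answer or a structured
// tool-call request, based on the tool list included in its context.
function stubModel(prompt, toolResults) {
  if (toolResults.length === 0) {
    return { toolCall: { name: "getWeather", args: ["NYC"] } };
  }
  return { answer: `Packing list based on: ${toolResults.join("; ")}` };
}

function runAgent(prompt) {
  const toolResults = [];
  for (let step = 0; step < 5; step++) {    // cap the number of round trips
    const reply = stubModel(prompt, toolResults);
    if (reply.answer) return reply.answer;  // model produced a final answer
    const { name, args } = reply.toolCall;  // model "drove": it picked the tool
    toolResults.push(tools[name](...args)); // app executes it, feeds result back
  }
  throw new Error("too many tool calls");
}

console.log(runAgent("Plan what I need to do for my upcoming vacation to NYC"));
```

The key point for the beginner question above: the application still owns the loop (it executes the tool and sends the result back in the next turn), but the model decides whether and when to ask for the tool.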
Depends on the definition of SPA, but in the days of jQuery, I hardly consider any of that a single-page app. For example, the server-rendered page had most of the initial HTML already rendered, jQuery would attach a bunch of listeners, and then on an update it would incrementally patch parts of the page. If we were lucky, we had a duplicated x-template-mustache tag containing a logic-less template that we could use to update parts. jQuery spaghetti and that duplication were the “problem” which drove everyone to SPAs.
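That pattern can be sketched roughly like this (a toy mustache-style renderer with made-up template and data names, not any particular library):

```javascript
// Logic-less substitution: replace each {{key}} with data[key].
// In the jQuery era the template string would typically be duplicated
// from the server markup into a <script type="x-template-mustache"> tag.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ""
  );
}

const rowTemplate = "<li>{{name}}: {{count}} items</li>";

// On an update event you would re-render just this fragment and patch it
// into the server-rendered page, e.g. $("#cart").html(render(rowTemplate,
// payload)) - the rest of the page stays untouched.
console.log(render(rowTemplate, { name: "Cart", count: 3 }));
```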
When they get to what their implementation is, I’m not even sure what it is. Like, how is the following different from a library like acts_as_state_machine (https://github.com/aasm/aasm)? Are they auto-running the retries in a background job which they handle (like a “serverless” background job)?
“Implementing orchestration in a library connected to a database means you can eliminate the orchestration tier, pushing its functionality into the application tier (the library instruments your program) and the database tier (your workflow state is persisted to Postgres).”
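As I read that quote (a hedged sketch of the general idea, not their actual API): each step's result is checkpointed in the database, so retrying a crashed workflow is just re-running the function - already-completed steps replay from their checkpoints instead of re-executing, and no separate orchestrator process is needed. With an in-memory Map standing in for Postgres:

```javascript
// In-memory stand-in for the database table of workflow step results.
const checkpoints = new Map(); // key: `${workflowId}:${stepName}` -> result

// Run a step at most once per workflow: if a checkpoint exists, return it
// instead of re-executing. This is the "orchestration pushed into the
// library + database" idea in miniature.
async function step(workflowId, stepName, fn) {
  const key = `${workflowId}:${stepName}`;
  if (checkpoints.has(key)) return checkpoints.get(key);
  const result = await fn();
  checkpoints.set(key, result); // in the real thing, a Postgres write
  return result;
}

let charges = 0; // side effect we must not repeat on retry
async function checkout(workflowId) {
  const order = await step(workflowId, "createOrder", async () => ({ id: 42 }));
  await step(workflowId, "charge", async () => { charges++; return "charged"; });
  return order.id;
}

(async () => {
  await checkout("wf-1");  // first run executes both steps
  await checkout("wf-1");  // "retry" replays from checkpoints; no double charge
})();
```

On that reading, the retries run wherever your application runs (or in a worker you own), with the database providing the durable state, rather than in a separate orchestration service - which is the contrast with a plain state-machine library.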
The "Cregit" tool might be of interest to you: it generates token-based (rather than line-based) git "blame" annotation views: https://github.com/cregit/cregit
I learned of Cregit recently - I just submitted it to HN after seeing multiple recent HN comments (yours being one I've had open in a tab for a week to remind me! :) ) discussing issues related to line-based "blame" annotation granularity.
`git log -L 120,150:filename.txt` will show all commits that have touched lines 120-150 in filename.txt, along with those lines. It gives a view into a subset of lines, but won't help if the code in question moved out of the line range.
In JetBrains IDEs you select a line of code and say "Show History for Selection", and it parses all that for you and just gives you a history of commits affecting that specific line. It's got a built-in commit browser and visual diff tool.
Not sure about a GUI way of doing this, but in the CLI, I use `git log --patch [path]` all the time. It will show you the history of diffs for that one file.