
I've found this memory across chats quite useful on a practical level too, but it has also added to the feeling of developing an ongoing personal relationship with the LLM.

Not only does the model (ChatGPT) know about my job, tech interests, etc., and tie chats together using that info.

But I've also noticed that the "tone" of the conversation seems to mimic my own style somewhat, in a slightly OTT way. For example, ChatGPT will now often call me "mate" or reply with things like "Yes mate!".

This is not far off how my own close friends might talk to me; it definitely feels like it's adapted to my conversational style.


This one seems relatively cheap and ships from Germany

https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto...


Don't know if birds count, but the egret population has exploded in the UK in the last 10 years.

There are zoos here that have them in their exotic bird sections. That always makes me smile, as they're often visible even in London parks and rivers.


I used it every day for the first 5 years of my career, and got my first big job off the back of a shorter graduate job that exposed me to it.

It will always have a special place in my heart too.

However, at least in the way it was used in my roles, I found it enforced far too rigid a separation between the data and the presentation.

There were multiple times a backend could not perform some function or transformation of the data, for various and always non-technical reasons.

That left it to the XSLT developers to figure out a solution, and sometimes, due to the limits of the language, that involved writing a custom Java function / XSLT plugin.

Things that would be incredibly simple with some sort of scripting language available in your frontend web app could be incredibly convoluted when all you had was an XSLT processor.
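
A hedged example of the kind of convolution I mean (not from my actual roles): replacing every occurrence of a substring, which in XSLT 1.0 takes a recursive named template:

    <!-- XSLT 1.0: replace all occurrences of $search in $text -->
    <xsl:template name="string-replace">
      <xsl:param name="text"/>
      <xsl:param name="search"/>
      <xsl:param name="replace"/>
      <xsl:choose>
        <xsl:when test="contains($text, $search)">
          <xsl:value-of select="substring-before($text, $search)"/>
          <xsl:value-of select="$replace"/>
          <!-- recurse on the remainder of the string -->
          <xsl:call-template name="string-replace">
            <xsl:with-param name="text" select="substring-after($text, $search)"/>
            <xsl:with-param name="search" select="$search"/>
            <xsl:with-param name="replace" select="$replace"/>
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="$text"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>

In JavaScript or Python that whole template is a single replace() call.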


It's also very simple and free of ads or any other extraneous clutter, a bit like Hacker News, which is also fast.

There's probably a lesson in there somewhere.


No, if anything I was disappointed to read that within 20% counted as correct! (I played it before reading your post!)


Initially the win criterion was within ±10% of the correct answer, but 15 minutes ago I changed it to ±20%. My rationale here is that the goal of the game is to get within the ballpark of the correct answer, and a guess of 80 billion when the correct answer is 100 billion seems quite good and indeed should probably win the game.
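
A minimal sketch of that check (hypothetical, not the game's actual code):

    def is_win(guess, answer, tol=0.20):
        # 80 billion vs 100 billion sits exactly on the 20% edge
        return abs(guess - answer) <= tol * answer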


Thank you for making this.

I have an idea for a gameplay variant that I think I would enjoy more:

  - If the first guess is within a factor of sqrt(10), then you win.
  - If not, you are given two choices for the second guess: Up or down.
  - Up and down are 10x higher and lower guesses (making them adjacent ranges to the first guess).
  - If the second guess is wrong, you lose. No more guesses.
The point is that the second guess makes you rethink the original question once more, to figure out what it was that you missed, which is more fun than doing bisection.

I wrote 10x and sqrt(10) to make a game literally about orders of magnitude, but you could of course use smaller numbers, like 4x and sqrt(4), to make it harder.
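
A rough sketch of those rules, just to pin down the mechanics (hypothetical code, not anyone's actual implementation):

    import math

    def play(first_guess, answer, direction, factor=10):
        # A guess wins if it is within a factor of sqrt(factor) of the
        # answer, i.e. inside one order-of-magnitude-wide bucket.
        def win(g):
            return answer / math.sqrt(factor) <= g <= answer * math.sqrt(factor)
        if win(first_guess):
            return "win on first guess"
        # One retry only: shift a full factor up or down, which gives
        # a range adjacent to the first guess's range.
        second = first_guess * factor if direction == "up" else first_guess / factor
        return "win on second guess" if win(second) else "lose"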


I greatly appreciate your suggestions, munch. I really like the idea, but I worry that the game would lose some of its mainstream appeal that way. I don't know, I have to look into this in more detail.

However, I did find a solution to bring the focus a bit away from the binary search/bisection.

Namely, the game now shows a hint after the second incorrect guess. For example, the hint "The US covers 1.87% of the Earth's surface." is displayed for the question about what percentage of the Earth's surface is land.

This of course lets you, just as you wanted, rethink the original question once more, now in light of new information.


How does the new information from the hint impact your guesses and assumptions?


You forgot the part where, when you actually get to the content, there are usually about five paragraphs of SEO filler before it actually gets around to answering the topic of the post.


You are lucky if they even answer.

Most of those are like:

    $movie release date
    
    <five paragraphs of garbage>
    
    While we don't know the actual $movie release date yet, ...


These are the worst things ever


I have noticed that a lot. For example:

What is the price of the Switch 2?

The Switch 2 can be purchased with money. <Insert the Wikipedia article about currencies since the bronze age>


Recipe for Foo. Foo has always been my favorite dish. I fondly remember all the times my grandma made this for me. My grandma, who was born on August 2, 1946, as the daughter of… (10 more pages of text) To cook Foo the way my grandma did, you first need some Bar. Bar is originally native to the reclusive country of… (20 more pages of text)


You forgot the four paragraphs about how they went on a journey of self-discovery that led to them spending time in the remote village of Y, learning the traditional methods of cooking the dish.

The dish in question is a ham sandwich.


Yeah, recipes are the worst. At least they acknowledge it themselves and give you a "jump to recipe" button most of the time. I sometimes hit the print button and just use the preview screen too.


I don't think the recipes themselves are much at fault here. It seems to be the fault of search engines preferring recipes with longer stories over just-the-recipe blogs or sites like AllRecipes. We humans just have to suffer as a result of the artificial selection of what the search engine wants us to experience.


It's not just that: recipes on their own are, AIUI, not copyrightable.


https://cookingforengineers.com is giving 500s for me. Per the Wayback machine it was working as recently as last month. They do include background stories but they're much better about this sort of thing. (The old-school aspects of the page layout also help.)


Paprika (an app for storing recipes) can parse out the ingredient list and directions from a webpage. It's surprisingly good at it.


thank you for this! i'll check it out


I don't even know if the recipes themselves are real and tested any more or just slop.

More often than not, I'm coming across dishes that just do not make sense, or that have been poorly plagiarized by someone who doesn't understand the cuisine they're trying to replicate, with absolutely nonsensical steps, substitutions, or quantities. I used to have a great success rate when googling for recipes, but now it's almost all crap, not even a mixed bag.


Big Mama's Best Brownie Recipe.

Let's start at the beginning. I was born in 1956 in Chicago. My mother was a cruel drunk and the only thing my father hated more than his work was his family.


This might be a hot take, but I'm usually fine with this... if it's authentic, which most of the time it isn't.

But I don't know, I feel like personal stories are what really makes a blog worth reading?

I don't like it when it's unnecessary "info dump" type. Like, "we all know the benefits of garlic (proceeds to list the well known benefits of garlic)". It's not personal or relevant.

I just want there to be a well-formatted version of the recipe at the bottom for quickly checking it on a second or third visit.


Sure, but there's a time and a place, and when I'm looking for a recipe, especially if I'm landing on a site for the first time and don't even know who the author is yet, it's the time and place for doing the shopping or the cooking, not for reading even an interesting origin story.


I discovered justtherecipe.com and never went back. So far it's free and ad-free, though I suspect that will end soon.


This is usually okay... What's not okay is that this narrative is usually broken up by ads and a constantly changing layout as you scroll, which eventually jumps around so many times that you can't resume scrolling, and then the page crashes because too many trackers/ads/etc. overwhelmed the browser (on mobile).


No, it's not okay. I used Google to look for a brownie recipe; I want a brownie recipe and nothing else.


Now that’s a recipe I would read. We can fold in the failing publishing industry and have authors presented by King Biscuit Flour.


And then the part where you have to create an account to read past the SEO filler :(

It's so sad, 'cause it drags down good pages. I recently did a lot of research into camping and outdoor gear, and of course I started the journey from Google. But a few sites kept popping up; I really liked their reviews and the quality of the items I got based on them, so I started just going directly to those sites for comparisons and reviews. This is how it's supposed to work, IMHO.


Outdoor Gear Lab is great, it’s true.


Nailed it :D


And that when the adverts refresh, all the content on the page shifts and you lose track of what you have read.


or even worse, the page itself is just an AI summary of the topic


Not to mention the mandatory Cloudflare "are you human" pre-vetting page I'm seeing on 15% of sites.

Jesus wept.


And I often have to wait for it to automatically get through, which it does not, requiring me to click to verify that I am indeed a human. Even though I'm not even using Tor or a VPN.


Assuming that clicking to verify even works, which is shaky. On Safari, it seems to just loop me most of the time... which bums me out, since I generally don't have many issues with Safari.


Good news! Now they are often AI drivel too. So you can get an AI summary of more AI crap.


This is why most of the results on the first page of a Google search are AI slop.


Either that or fifty paragraphs of AI slop blathering in circles about the topic.


Scripts ending up in different places has nothing to do with Debian packaging though. It just puts them exactly where you tell it - you're in complete control of that.

If you're unaware, the FHS lays out some guidelines a lot of distros follow - /usr/bin is a good place for any executable coming from a package, in my book.
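
For instance, a debian/install file is just a list of source paths and destination directories, and dh_install copies each one verbatim (paths hypothetical):

    scripts/mytool      usr/bin
    conf/mytool.conf    etc/mytool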

Agreed, debuild can be opaque, as it is really a wrapper around loads of dpkg-* scripts. Digging further into those helps, but it's not obvious.

WRT git, I find it useful to get the entire repo as a tar.gz and treat it like an upstream source package, then have the Debian build stuff as a process on top of that.

This is how many packages are maintained for real in the distro (i.e. imagine being the package maintainer of redis or vim and doing it that way). It kind of makes sense to me to follow the pattern, as things are geared up for it.


Yes, it is incredibly easy if those files have no external dependencies - say a Go binary, for example.

The format of the .deb package itself is also pretty simple and straightforward.
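
To illustrate: a .deb is just an ar archive with three members - a format-version file plus two tarballs (package name hypothetical; the compression suffix varies with the dpkg version):

    $ ar t mytool_1.0-1_amd64.deb
    debian-binary
    control.tar.xz
    data.tar.xz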

But historically, and probably even now, almost all packages were nothing like this.

When you need to target particular shared libs as dependencies and link against them, confirming that via build isolation, etc. - which is what the vast majority of packages have to do - then all the other complex tools become a necessity.


Are you acquainted with the New Maintainers' Guide, rather than the wiki?

To be honest, I found it an incredibly comprehensive overview of Debian packaging, all the way up to using pbuilder to ensure correct dependencies and sandboxed builds, and on to lintian to assess the quality of the artifacts.

https://www.debian.org/doc/manuals/maint-guide/

Building complex Debian packages is time-consuming, with a lot to learn, but to be honest I don't remember having many issues with this guide when I started out.


You are only making the original author's point for him even more strongly.

I didn't even know this guide existed, for example -- because of all the noise that exists in the same space.

I have managed to build Debian packages (and even self-host a repository) IN SPITE of the existing documentation, not because of it.


That's not the experience I had.

The guide I linked to used to be linked from the main docs page, I believe - I went to double-check, and it now links to this more recent guide instead, which seems equally thorough.

https://www.debian.org/doc/manuals/debmake-doc/

These guides were sufficient for me to learn to package pretty complex Debian projects, and they're linked from the docs home page on the Debian site. Guess that's all I'm saying.

