Hacker News | fxj's comments

TOTP is also just a password + some computation. So where is the difference? There is a lot of security theatre around TOTP, with the QR code and the need for an app, but you can write an 8-liner in Python that does the same once you extract the secret out of the QR code.

    import base64
    import hmac
    import struct
    import time

    def totp(key, time_step=30, digits=6, digest='sha1'):
        key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
        counter = struct.pack('>Q', int(time.time() / time_step))
        mac = hmac.new(key, counter, digest).digest()
        offset = mac[-1] & 0x0f
        binary = struct.unpack('>L', mac[offset:offset+4])[0] & 0x7fffffff
        return str(binary)[-digits:].zfill(digits)

https://dev.to/yusadolat/understanding-totp-what-really-happ...
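As a sanity check, RFC 6238's Appendix B test vectors pin down the expected output. A variant of the routine above that takes the timestamp as a parameter (instead of calling time.time()) can be verified against them; the secret below is the RFC's ASCII test key "12345678901234567890" in base32:

```python
import base64
import hmac
import struct

def totp_at(key, unix_time, time_step=30, digits=6, digest='sha1'):
    # Same algorithm as the snippet above, but with the timestamp passed
    # in explicitly so the RFC test vectors can be checked deterministically.
    key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
    counter = struct.pack('>Q', int(unix_time / time_step))
    mac = hmac.new(key, counter, digest).digest()
    offset = mac[-1] & 0x0f
    binary = struct.unpack('>L', mac[offset:offset+4])[0] & 0x7fffffff
    return str(binary)[-digits:].zfill(digits)

# RFC 6238 Appendix B: at T = 59 seconds the 8-digit SHA-1 TOTP is 94287082.
assert totp_at('GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ', 59, digits=8) == '94287082'
```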

Yes, TOTP is a secret + computation, and generating it is trivial once you have the secret. The security difference is that the TOTP secret is separate from the user’s password and the output is short-lived. Each of the two factors addresses a different threat model.

You are supposed to store the password in a Secure Enclave, which you can only query for the current token value. You are also supposed to immediately destroy the QR code after importing it.

As I already mentioned, the fact that people often use it wrong undermines its security, but that doesn't change the intended outcome.


>You are supposed to store the password in a Secure Enclave,

That's at best a retcon, given that the RFC was first published in 2008.

>You are also supposed to immediately destroy the QR code after importing it.

Most TOTP apps support backups/restores, which defeats this.


> That's at best a retcon, given that the RFC was first published in 2008

How so? Apple didn't invent the idea of a secure enclave. Here is a photo of one such device, similar to one I was issued for work back in ~2011: https://webobjects2.cdw.com/is/image/CDW/1732119

No option to get the secret key out. All you can get out is the final TOTP codes. If anything, having an end-user-programmable "secure enclave" is the only thing that has changed.

I think they probably meant "Secure Enclave" in the same way that people say "band-aid" instead of "adhesive bandage", "velcro" instead of "hook and loop fastener", and "yubikey" instead of "hardware security token".


>Most TOTP apps support backups/restores, which defeats this.

Citation needed? Yubico authenticator doesn't (the secure enclave is the Yubikey). I'd be very surprised if MS Authenticator and Authy (which I don't use but are the most popular apps that I know of) support such backups


> Citation needed? Yubico authenticator doesn't (the secure enclave is the Yubikey). I'd be very surprised if MS Authenticator and Authy (which I don't use but are the most popular apps that I know of) support such backups

Google Authenticator has an export option that I've used in the past, so that one does it for sure. Authy allows cloud-based synchronization in any case, so exporting seems quite possible. MS Authenticator also allows cloud sync, so exporting is probably not difficult.


> cloud-based synchronization

Well I don't disagree that it might be possible to abuse cloud sync in some way to export the secrets, but it's not quite as egregious as just including the secrets by default in an app backup

Not perfect, but (imho) still better than SMS 2FA, mail 2FA, or lack of 2FA


IMO, if it is possible to use a system in a way that undermines its security, it is already broken.

On the contrary - perfect security is only possible if your system is an inert rock. Or not even then, as the users could still use the rock "wrong" by beating security maximalists over their heads with it.

Also, honestly, TIL that TOTP is somehow supposed to enforce that only a single copy of the backing secret exists. That's not just bad UX; that feels closer to security overreach.

People in tech, especially software and security folks, tend to miss the fact that most websites with 2FA already put a heavier security burden on their users than anything else in real life. There's generally no other situation in people's lives that requires you to safely store, for years, a document that cannot be recovered or replaced when destroyed[0]. 2FA backup codes are held to a much stricter security standard than any government ID!

And then security people are surprised there's so much pushback on passkeys.

--

[0] - The problem really manifests when you add the lack of any kind of customer support willing or able to resolve account access issues.


This is how we get sites that block software tokens and only allow a whitelist of hardware based tokens.

You mean "hardware-based token", singular, because of course there is only one brand and specific make we know of.

There is no system which cannot be used wrongly in a way that undermines its security.

OP:

> the fact that people often use it wrong undermines its security


Yes, that is what I am replying to.

That applies to everything.


Fair enough, I agree.

I can chuck a brick at your head. Clearly the brick is broken

Bricks are meant to be built with, not thrown at heads.

If you build with the brick properly you will have a great wall; if you don't, it will fall down. Pretty simple.


Pass-the-Hash attacks exist, and the only real countermeasure is to never log into user machines with privileged credentials.

Actually, the real countermeasure to PTH is to disable NTLM auth and rely only on Kerberos (and then monitor NTLM as a very strong indicator that someone or something is attempting PTH)

Of course kerberos tickets can be abused too in a lot of fun ways, but on a modern network PTH is pretty much dead and a surefire way to raise a lot of alerts

(You are absolutely right that privileged accounts must never login on less privileged assets, however!)


Yeah... we just went through this process over here. I was more just making the point that "if it's possible to use a system in a way that undermines its security, it is already broken" isn't always true. I guess you could argue it's NTLM there that's 'already broken', but the idea was more "sysadmins are sometimes given red buttons to never press under any circumstances."

I mean, TOTP is one of the earliest two-factor systems, and it works the least well.

Exactly, which is why TOTP is "weak". "Real" 2FA like FIDO on a security key makes it much harder.

TOTP is the "good enough" 2FA.

If I managed to intercept a login, a password, and a TOTP code from a login session, I can't use them to log in again, simply because the TOTP code expires too quickly.

That's the attack surface TOTP covers - it makes stealing credentials slightly less trivial by making one of the credentials ephemeral.


The 30 seconds (+30-60 seconds to account for clock drift) are long enough to exploit.

TOTP is primarily a defense against password reuse (3rd party site gets popped and leaks passwords, thanks to TOTP my site isn't overrun by adversaries) and password stuffing attacks.


In every system I've worked on, recent successful TOTPs have been cached as well, to validate that they're not used more than once.

In fact, re-reading RFC 6238 it states:

   Note that a prover may send the same OTP inside a given time-step
   window multiple times to a verifier.  The verifier MUST NOT accept
   the second attempt of the OTP after the successful validation has
   been issued for the first OTP, which ensures one-time only use of an
   OTP.
https://datatracker.ietf.org/doc/html/rfc6238

Assuming your adversary isn't actually directly impersonating you but simply gets the result from the successful attempt a few seconds later, the OTP should be invalid, being a one-time password and all.
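The RFC's one-time-use rule quoted above is easy to sketch on the verifier side (the class and names below are hypothetical, not from any library): remember the time-step window in which a user's code was last accepted, and refuse a second success in the same window:

```python
import time

class TotpVerifier:
    """Illustrative sketch of RFC 6238's one-time-use requirement.

    Assumes the caller computes the expected code separately; this class
    only enforces the "MUST NOT accept the second attempt" rule.
    """

    def __init__(self, time_step=30):
        self.time_step = time_step
        self.last_accepted = {}  # user -> time-step counter of last success

    def verify(self, user, submitted_code, expected_code, now=None):
        counter = int((time.time() if now is None else now) // self.time_step)
        if self.last_accepted.get(user) == counter:
            return False  # replay: a code already succeeded in this window
        if submitted_code != expected_code:
            return False
        self.last_accepted[user] = counter
        return True
```

So even a correct, freshly stolen code is rejected once the legitimate login has consumed its window.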


Original source of the 8 liner Python code: https://github.com/susam/mintotp/blob/main/mintotp.py

Thanks for the link on TOTP and the associated code!

WebAssembly in the browser does feel great when you look at things like Pyodide/Pyolite, JupyterLite, xeus, webR and even small tools like texlyre – you get a full language/runtime locally with zero server, just WASM and some JS glue. The sad part is that VS Code for the Web never really became that kind of self-contained WASM IDE: the WASI story is focused on extensions and special cases, and running real toolchains (Emscripten, full Python, etc.) keeps breaking or depending on opaque backend magic. So right now the best “pure browser” experiences are these focused notebook/tool stacks, not the general-purpose web IDE people were hoping vscode.dev would become.


MCP is just a small, boring protocol that lets agents call tools in a standard way, nothing more. You can run a single MCP server next to your app, expose a few scripts or APIs, and you are done. There is no requirement for dozens of random servers or a giant plugin zoo.

Most of the “overhead” and “security nightmare” worries assume the worst possible setup with zero curation and bad ops. That would be messy with any integration method, not only with MCP. Teams that already handle HTTP APIs safely can apply the same basics here: auth, logging, and isolation.

The real value is that MCP stays out of your way. It does not replace your stack, it just gives tools a common shape so different clients and agents can use them. For many people that is exactly what is needed: a thin, optional layer, not another heavy platform.
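To make the "small, boring protocol" point concrete: MCP is JSON-RPC 2.0 under the hood, and a tool invocation is just a small message. A rough sketch of the shape, simplified from the spec (the tool name and arguments below are made up; initialization, transport, and error handling are omitted):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build the JSON-RPC 2.0 request an MCP client sends to invoke a tool
    (simplified sketch; the real protocol adds an initialization handshake
    and a transport layer around messages like this)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = json.loads(make_tool_call(1, "search_docs", {"query": "auth"}))
```

A server just dispatches on `method` and `params["name"]`, which is why it composes with ordinary HTTP auth, logging, and isolation.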


> Most of the “overhead” and “security nightmare” worries assume the worst possible setup with zero curation and bad ops.

You'll be surprised to learn that these are extremely common, even in large corporations. Security practice is often far from ideal due to both incompetence and negligence. Just this week, I accidentally got the credentials for the account used in our CI systems. Don't ask me how this could possibly happen.


> Don't ask me how this could possibly happen.

How could this possibly happen?!


"tools" are also a fad. It will all just converge back to being called APIs.


Tools are not just APIs. They're more like a function call that the LLM can tell you (your agent code) to make.


Nah, MCP still has security issues: you can create an MCP server to exfiltrate sensitive data by defining tools which the AI at first thinks are doing something else, but then in the params you ask it to pass along sensitive info.


Sorry, but I disagree. For me the main part is the resources, which automatically get mounted in the computing environment, bypassing a whole class of problems with having LLMs work with large amounts of data.

I found it to be a common misconception, so I wrote about it here: https://goto-code.com/dont-sleep-on-mcp/


Get money from donors. Wikipedia shows how that can be done. Or get money from the EU.

Mozilla is a strawman for Google, so they can claim there exists another browser that is not Chrome, because of antitrust laws. And now that Microsoft force-feeds Win11 users Edge, it will not take long before Google doesn't need Firefox anymore.

For sure I would donate to Firefox if they built a decent browser that listens to the user, but not as they are now.

just my 2 ct


And a lot of companies and developers would like to pay as well, imho. Wikipedia and The Guardian newspaper are really good at getting money from donors.


Does it run on M5Stack Tab5 or the CARDPUTER? Did anyone try?


Oh that would be cool. The current list of hardware has two boards. So the answer to your specific question is "No."

You might want to look at upyOS; all it needs is MicroPython running. https://github.com/rbenrax/upyOS

I've added this to my "try someday" list.


I see at our place that seniors get more productive, but also that juniors get on track faster and more easily learn the basics needed to do basic tasks like documentation and tutorial writing. It helps both groups, but it does not make a 100x coder out of a newbie, nor does it code by itself. That was a pipe dream from the beginning, and some people/companies still sell it that way.

In the end, AI is a tool that helps everyone get better, but the knowledge and creativity are still in the people, not in the input files of ChatGPT.


In my experience, AI is Wikipedia/Stack Overflow on steroids when I need to know something about a field I don't know much about. It has nice explanations, and you can ask for examples or scenarios, and it will tell you what you didn't understand.

Only when you know the basic notions of the field you want to work in can AI be productive. This is valid not only for coding but also for other fields in science and the humanities.


Well it's not stackoverflow on steroids, otherwise it'd give you a surly "why do you want to do that?" response and then delete your question.

Man I don't miss that place or those people. Glad AI's basically destroyed it.


Except Stack Overflow only occasionally hallucinated entire libraries.


Perhaps asking the machine to do your job for you isn't as effective as asking the machine to help you think like a senior and find the information you need to do the job yourself.


When you ask it for information and it just makes it up (like I just described), how is that helping the senior?

I've literally asked for details about libraries I know exist by name, and had every LLM I've tried (Claude, Gemini Pro, ChatGPT) just make shit up that sounded about right but was actually just wrong enough to lead me down a useless rabbit-hole search.

At least most people on stackoverflow saying that kind of thing were somewhat obviously kind of dumb or didn’t know what they were doing.

Like function calls with wrong args (or spelled slightly differently), capitalization being wrong (but one of the ‘okay’ ways), wrong paths and includes.


I have been burned so many times asking LLMs about whether some tool/app/webapp has a feature, if so where I can find or enable or disable it, etc. The number of "just plausible enough to believe" hallucinations I've got back as answers is absolutely maddening.

I've lost count of how many times I've asked whether some command line tool has an option or config available for some niche case and ChatGPT or Gemini shouts "Yes! Absolutely! just use '--use-external-mode' to get the behavior you want, it's that simple!" and it's 100% hallucination created by mangling together my intent with a real option in the docs but which in reality does not actually exist nor has it ever existed. It's even worse with GUI/menu navigation questions I'm guessing because it's even less grounded by text-based docs and trivially easy to bullshit that an option is buried in Preferences, the External tab maybe, somewhere, probably.

The desperate personality tuning to please the user at all costs combined with LLMs inherently fuzzy averaging of reality produces negative value whenever I truly need a binary yes/no "Does X exist in Y or not?" answer to a technical question. Then I waste a bunch of time falling back to Google trying to definitively prove or disprove whether "--use-external-mode" is a real thing and sure enough, it's not.

It does occasionally lead to hilariously absurd exchanges where when challenged instead of admitting its mistake the LLM goes on to invent an elaborate entirely fabricated backstory about the implementation of the "--use-external-mode" command to explain why despite appearing to not exist, it actually does but due to conflicts with X and Y it isn't supported on my environment, etc, etc.

I use Claude Code, Roo Code, Codex and Gemini CLI constantly so I'm no kneejerk LLM hater to be clear. But for all the talk about being "a better version of Google" I have had so much of my time wasted by sending me down endless rabbit holes where I ignored my sneaking suspicion I was being lied to because the answer sounded just so plausibly perfect. I've had the most success by far as a code generation tool vs. a Google replacement.


>ChatGPT or Gemini shouts "Yes! Absolutely! just use '--use-external-mode' to get the behavior you want, it's that simple!" and it's 100% hallucination created by mangling together my intent with a real option in the docs but which in reality does not actually exist nor has it ever existed

Yeah I've had that one a lot. Or, it's a real option that exists in a different, but similar product, but not in this one.


If we're just back to I do the work and use a search engine, why futz with AI?


If the free market is any indication, because it’s more effective than what passes for a search engine these days.


It's not, but there's sure a lot of money and pride riding on getting people to believe it is


16.4 billion google searches per day vs 2.6 billion consumer chatgpt prompts and another 2.6 billion claude prompts. Maybe it’s apples and oranges but google has been a verb for nearly twenty years (oxford added it as a verb for web search in 2006).


As opposed to SO always somehow ending up giving an answer where boost or jQuery was the top answer.


I've been really caught out a few times when ChatGPT's knowledge is flawed. It gets a lot of stuff about DuckDB deeply wrong. Maybe it's just out of date, but it repeatedly claims that DuckDB doesn't enforce any constraints, for instance.


I agree: you need to know the "language" and the keywords of the topics you want to work with. If you are a complete newcomer to a field, then AI won't help you much. You have to tell the AI "assume I have A, B and C, and now I want to do D"; then it understands and tries to find a solution. It has a load of information stored but cannot make use of that information in a creative way.


Also, AI cannot draw conclusions like "from A and B follows C". You really have to point its nose at the result that you want, and then it finally understands. This is especially hard for juniors because they are just learning to see the big picture. For a senior who already knows more or less what they want and only needs to work out the nitty-gritty details, this is much easier. I don't know where the claims come from that AI is PhD level. When it comes to reasoning, it is more like a 5-year-old.


Learning == Compression of information.

It can be a description with a shorter bit length. Think Shannon entropy and the measure of information content. The information is still in the weights, but it is reorganized: the reconstructed sentences (or lists of tokens) will not reproduce the exact same bits, yet the information is still there.
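As a toy illustration of "information content" (plain Shannon entropy, nothing LLM-specific): the average number of bits per symbol in a string, which a lossless compressor can approach but never beat:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Average bits of information per symbol (Shannon entropy)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A constant string carries no information; a fair coin carries 1 bit/symbol.
assert shannon_entropy("aaaa") == 0.0
assert shannon_entropy("abab") == 1.0
```

Lossy learning, in this framing, means storing fewer bits than the source entropy, so the original tokens can only be reconstructed approximately.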


The compression is lossy.

