
Pretty soon I think there will be a market for CCTV cameras that use a tamper-proof module to sign the video output, with a unique key and a key chain back to the manufacturer. Video evidence simply won't be admissible unless it's signed, and you can present the undamaged camera in court.
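
A rough sketch of what the signing side could look like, assuming an Ed25519 device key kept in the tamper-proof module and a certificate chain back to the manufacturer (all names here are hypothetical, and the certificate-chain validation is omitted):

    # Hypothetical sketch: the camera module signs a hash of each recorded
    # segment plus its timestamp with a device-unique key; a verifier checks
    # the signature and would then validate the device certificate back to
    # the manufacturer (that step is omitted here).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    device_key = ed25519.Ed25519PrivateKey.generate()  # would live in the tamper-proof module

    def sign_segment(video_bytes, timestamp):
        digest = hashlib.sha256(video_bytes).digest()
        return device_key.sign(digest + timestamp.encode())

    def verify_segment(device_public_key, video_bytes, timestamp, signature):
        digest = hashlib.sha256(video_bytes).digest()
        try:
            device_public_key.verify(signature, digest + timestamp.encode())
            return True
        except InvalidSignature:
            return False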


> Video evidence simply won't be admissible unless it's signed

Courts today readily admit the testimony of eyewitnesses, which are notoriously fallible. It is a common misconception among tech people that all systems operate in rational ways, but this is only really the case for machines.


> It is a common misconception among tech people that all systems operate in rational ways, but this is only really the case for machines.

Very elegant and concise. I would say this extends to all deeply complex fields: engineering, physics, quantitative finance, etc. They all have a mindset of "I can understand the whole system because it has logical components that build on top of one another" and therefore expect and demand that all other fields work the same way.


To be fair, wiser engineers seek to discriminate between the rational and irrational parts of a system.


A video of someone committing a crime carries far more weight than an eyewitness account currently.


I don't know if that is really true.

TONS of people are convicted based mainly on eyewitness accounts, only to be released later when proper evidence is found to exonerate them.

Most of the Innocence Project is made up of cases like that


Eye-witnesses can at least be prosecuted for perjury if they're found to be deliberately lying. It's hard to apply the same threat to a CCTV camera.

Honest mistakes are obviously a separate issue, but the parallel there would be bad lighting or a corrupted recording, neither of which are new issues.


"Honest" (subject to police pressure) mistakes in identifying people are vastly more common than a video accidentally showing a different face.


That just restricts the circle of people who can doctor these signed videos; it does nothing to solve the underlying issue.

State actors would love to be able to deepfake someone into a crime scene and then say 'hey look this is signed so it's totally legit'.


This is maybe one of the few relevant use cases for blockchain technology: cameras can add video checksums to a public audit log secured, e.g., by proof of work or another trust mechanism. If enough parties archive this log, it becomes very difficult to forge videos without tampering with the actual recording device, because duplicates as well as tampered videos created at a later time would be easy to detect in the data, and the replication plus proof of work would make it very difficult to forge the entire audit log.
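
As a toy sketch (not a real chain, and leaving out the proof-of-work and replication parts entirely), the log could just be hash-chained entries of video checksums, so rewriting any old entry changes every later one:

    # Toy sketch of an append-only audit log: each entry commits to the hash
    # of the previous one, so later alteration of any entry is detectable by
    # anyone holding a copy of the log. Proof of work and replication across
    # independent parties are not modelled here.
    import hashlib, json, time

    def entry_hash(entry):
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    log = [{"prev": None, "video_sha256": None, "time": 0}]  # genesis entry

    def append_checksum(video_bytes):
        entry = {
            "prev": entry_hash(log[-1]),
            "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
            "time": time.time(),
        }
        log.append(entry)
        return entry

    def verify_log():
        return all(log[i]["prev"] == entry_hash(log[i - 1]) for i in range(1, len(log)))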


You don't need a blockchain or proof of work for that; timestamped digital signatures are already in use. You just need a proper trust infrastructure.

A blockchain with proof of work solves the problem of decentralization and absence of trust; however, for legal matters, having a centralized root of trust is the simpler way to go and requires far fewer resources.

However, tampering with the actual recording device is a very relevant risk - I struggle to imagine an attacker who has the desire and capability to commit some serious crime, and convincingly fake a video as part of it, but would be foiled because they can't figure out a way to upload it in exactly the manner a real camera would.
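
To make the comparison concrete, a minimal sketch of the centralized alternative: a timestamping authority countersigns the submitted checksum together with the current time, loosely in the spirit of RFC 3161 timestamp tokens (the key handling and token format here are purely illustrative):

    # Illustrative sketch of a trusted timestamping authority (TSA): it signs
    # the submitted checksum together with the time of receipt, so the holder
    # can later show the video existed in this form no later than that time.
    import hashlib, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    tsa_key = ed25519.Ed25519PrivateKey.generate()  # held only by the authority

    def issue_timestamp_token(video_checksum):
        issued_at = str(int(time.time())).encode()
        signature = tsa_key.sign(video_checksum + issued_at)
        return {"checksum": video_checksum, "issued_at": issued_at, "sig": signature}

    token = issue_timestamp_token(hashlib.sha256(b"...video bytes...").digest())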


Can't state-level actors take over 50.1% of the computing pool and achieve the same result?


In my understanding, 50.1% attacks are less of an issue here, as they would be easy to spot: individual parties still have the old blockchain when the adversary publishes the new one, so by comparing the two the forgery could always be reconstructed (if not averted).

As far as I understand this is a problem for Bitcoin because even temporary forging of the chain allows the adversary to double-spend funds, and once they are exchanged for real-world money/services they're gone. For an audit chain this shouldn't be a problem as it's only for logging and there is no monetary value tied to the chain. Also, the chain could be anchored with a traditional trust model and wouldn't need to be completely trustless like Bitcoin.


You can detect a 50.1% attack but how do you reconcile which fork is legit?


You cannot, but with the two forks you can see that someone tried to tamper with the signature of a given video as there will be conflicting signatures. That alone can make tampering unattractive for an adversary.


How so? What if all I want to do is cast doubt on a legitimate video? The point of video manipulation is to manipulate the human response, and casting doubt is just as effective. More so, since the average person is not familiar with the technology and thus defaults to "eh, it's all fake", because there's no way they can distinguish the likely real from the likely fake from the "too hard to tell".


Not all blockchains are distributed and vulnerable to 51% attacks. I suppose that's what the GP was hinting at.


Merkle trees don’t require proof of work.
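
For what it's worth, a Merkle root over a batch of video-segment hashes is only a few lines, and publishing (or signing) just the root commits to every segment in the batch with no mining involved - a sketch:

    # Minimal Merkle root over a list of segments: changing any one segment
    # changes the root, so a single published/signed root covers the batch.
    import hashlib

    def merkle_root(leaves):
        level = [hashlib.sha256(leaf).digest() for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                    # duplicate last node if odd
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    root = merkle_root([b"segment-1", b"segment-2", b"segment-3"])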


Trust is a part of nearly every system. At some point you're just going to have to trust a person or company to get some value in a practical way.


Yeah this is just another chain of evidence problem. We trust cops not to plant evidence, so long as they follow strict chain of evidence procedures. It’s an imperfect system but it’s largely effective.


And since it's not much different from planting evidence, we know for certain that it's going to happen.


The damage will really lie where it already does today, where people consume info without any filters in WhatsApp groups within their bubbles. If they can believe a photoshopped image with a caption, they will be much more inclined to believe video "evidence", and no amount of explaining deepfakes will convince them otherwise.

Any tamper-proofing process may be viable for news outlets and courts, but if the technology is easy enough to use, there will be no escaping the spread of fabricated facts.


There will probably be an intermediate step: in the same way that one can try to find evidence that a picture was "photoshopped" or an audio recording was tampered with, there are probably hints (for experts) that a video was edited.


And conspiracy theorists will apply that same level of skepticism to unedited video, seeing those hints everywhere.


Display fake video on screen. Point video-signing camera at screen.

You now have a signed fake video. You're welcome.


Commercial DVRs already sign video as it's recorded, and have for years.

It's more about chain of custody/location and time verification. Video authentication can't be used to tell if a video is real because it may just be a "real" recording of faked content. But you can say with some certainty that a video came from DVR X at Y time.


That should be easy to defeat: Patch into the camera sensor and feed in your own data, which the module will sign.


Presumably the whole thing would have to be tamper-resistant. Try to get to the CCD and it fries itself.

And sticking a monitor in front of the camera probably wouldn't work, at least if telesync/cam rips are anything to go by :)


How would this tamper-proof technology work? The first thing that came to mind was paper money having textures/etc to prevent counterfeiting.



