He has bitcoin and ethereum addresses. Unfortunately no monero, but you can swap in and out of monero without KYC or a login using one of the services listed on kycnot.me. Alternatively, there are zero-knowledge mixers on ethereum such as Tornado Cash (still legal in the UK to the best of my knowledge).
We know that the human brain is able to generate qualia (conscious experiences) despite our having no model for how these are generated. (To be clear, by consciousness, I mean the ability to have conscious experiences such as experiencing a color or pain, not self-awareness.) On the other hand, the hypothesis that a Turing machine on its own could generate conscious experiences leads to many seemingly absurd scenarios. Notably, one has to ask how a simulation of a supposedly conscious Turing machine using pen and paper could possibly be conscious, or indeed, why one would need to "run" a Turing machine for consciousness to arise and why a mere description of it would not suffice. And how could the mere description of a Turing machine (or equivalently, some C code) be enough for all of its unlived life's consciousness to manifest? If that were the case, one would have to concede that the set of all possible conscious Turing machines is conscious and their experiences are manifested already. If that were so, then it's hard to see any point in moral reasoning, so for the purpose of debating morals and ethics, I think we can rule this out.
Now, one might propose that consciousness only arises when a computational process is physically run in certain ways but not others (this is what proponents of Integrated Information Theory (IIT) typically believe). Assuming this is the case, then implementing a potentially conscious process with biological neurons presents a much higher moral hazard than an implementation of the same process electronically, or, safer still, on a von Neumann machine.
I would go even further, however, and propose that a conscious being (e.g. a being capable of generating the qualia of the color red, for instance) cannot be simulated, i.e. conscious processes are generally noncomputable. Why? Well, consider what Chalmers calls the meta-problem of consciousness, which is to say the problem of why we perceive there to be a (hard) problem of consciousness in the first place (and why we are having this very conversation). A simulation of a conscious being would by definition present the same behaviours as that being (given the same stimuli; for simplicity, we can consider the stimuli as part of the simulation itself without loss of generality). Therefore a simulation of myself, for instance, would generate the same thoughts about consciousness itself, and this very same text. But, if we accept the proposition of my first paragraph (that a Turing machine on its own cannot be conscious), this would imply that our whole thought process surrounding consciousness, and indeed our very belief that we are conscious, is purely coincidental. After all, the unconscious simulation of myself would claim and "believe" just as strongly as I do that it is conscious while that is not the case, which means that the process by which it derives these thoughts and conclusions would be wholly unrelated to the object of these thoughts (actual consciousness).
As such, it is my fairly strong belief that there is some physical "device" in our bodies which allows us to generate qualia and get feedback allowing us to store a record of these experiences. If I had to guess, I would say that this "device" is very likely located in our brains, and that it is quite likely spread throughout our neurons and possibly each one of them.
It should be noted that although I do not believe I can be simulated in my entirety for the above reasons, I do believe that I could likely be emulated with a high level of accuracy from the perspective of an outside observer. Actually, we see this already with ChatGPT being able to play the role of a conscious being. But unconscious objects appearing conscious is nothing new in a sense, since even a novel (especially one told from a first-person perspective) can be thought of as such an object already. It will be very interesting to see whether AIs trained without reference to the concepts of consciousness (filtering that out of the training data would be a hard task!) will ever present signs of consciousness. That would certainly put my above philosophical reflections into question.
---
So to summarise and answer your question more directly, I think there is something about our universe that allows for the generation of conscious experience and that our brains, likely on the neural level, have a bidirectional interaction with this something. As to what this is and how it works, I have no clue. Roger Penrose for instance put forth the idea that this might be related to quantum mechanics and certain molecular structures in our neurons capable of interacting with the quantum world in specific ways, but this is still pure speculation.
More importantly, we know from our own experience that interconnected biological neurons processing information and put under stress (rewards and penalties) are capable of generating conscious experience, including very negative ones. And, whereas I believe there is good reason to assume that Turing-equivalent processes such as electronic circuits are not capable of consciousness, I strongly believe that artificially created biological neural networks are very likely to be conscious, perhaps even at a fairly small scale already.
So, yes, I think any work creating artificial information processing systems using biological neurons needs to be very tightly regulated, if not stopped entirely. At the risk of sounding dramatic, we might accidentally create hell on Earth if we are not careful, at least if biological computing ever becomes competitive with transistor-based computing, which so far, I'm glad to say, has not looked to be the case... but I'm starting to worry.
> On the other hand, the hypothesis that a Turing machine on its own could generate conscious experiences leads to many seemingly absurd scenarios. Notably, one has to ask how a simulation of a supposedly conscious Turing machine using pen and paper could possibly be conscious, or indeed, why one would need to "run" a Turing machine for consciousness to arise and why a mere description of it would not suffice.
Applying this logic, heat is also fundamentally mysterious. What even is heat? Heat definitely exists, but is a mere description of it enough? If I run the simulation of a universe with heat; is that heat?
I'm not sure what you're getting at. There's a fundamental difference between such physical concepts, which I can ultimately describe mathematically (at various scales and levels of accuracy), and conscious experiences.
For instance, I can sensibly ask "what's it like to be a cat?". But "what's it like to be a rock?", or "to be a hot rock?", or "a cold rock?" doesn't make much sense since there's presumably nothing that it's like to be a rock regardless of its temperature.
> I'm not sure what you're getting at. There's a fundamental difference between such physical concepts, which I can ultimately describe mathematically (at various scales and levels of accuracy), and conscious experiences.
You can describe heat mathematically the same way you can describe the interactions of every atom in a brain mathematically, but neither yields/explains why it is the way it is. It just is.
> For instance, I can sensibly ask "what's it like to be a cat?". But "what's it like to be a rock?", or "to be a hot rock?", or "a cold rock?" doesn't make much sense since there's presumably nothing that it's like to be a rock regardless of its temperature.
I don't understand this analogy. What you are doing is ultimately putting a mirror on yourself. You and I have no idea what it is like to be each other. In fact, I would argue you don't even know "what it's like to be yourself from 1 day ago". You will ultimately just be reflecting your own current experience onto your supposedly previous self.
So, "what it's like to be a rock"? I don't know. Consciousness is just that mysterious. Suppose you lay out an array of iterations of my body, where i=0 is my whole body, i=1 is my body minus 1 atom, and so on up to i=N with all of my atoms removed. At what index does consciousness stop and start? To say that a rock or an atom does not have a consciousness (however different/minuscule in experience it may be) is to put a hard wall at some index K in this array. I just don't think that's true.
>You can describe heat mathematically the same way you can describe the interactions of every atom in a brain mathematically but neither yields/explains why it is the way it is.
My whole argument was that I don't think one can describe the interactions of every atom in the brain in a computable form. But actually, I would go further and say they likely can't even be described mathematically.
If this sounds crazy, consider that most mathematical objects are not describable (i.e. can't be singled out). For instance, most real numbers cannot even be imagined, and this stems from the fact that we can only describe things in a finite number of symbols, i.e. in bijection with the set of natural numbers, which is (infinitely) smaller than the set of real numbers.
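To make the counting argument a bit more precise (this is standard cardinal arithmetic, not anything specific to consciousness): any description is a finite string over some finite alphabet Σ, so the set of all possible descriptions satisfies

```latex
\[
|\Sigma^{*}| \;=\; \Bigl|\,\bigcup_{n\in\mathbb{N}} \Sigma^{n}\Bigr| \;=\; \aleph_{0},
\qquad\text{while}\qquad
|\mathbb{R}| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0}
\]
```

by Cantor's diagonal argument. Hence all but countably many real numbers admit no finite description of any kind.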
>I don't understand this analogy.
This wasn't an analogy but an example to show the fundamental distinction between nonconscious things, which can be dissected, described mathematically, and simulated (although it might be possible for something to be nonconscious and noncomputable at the same time, we have no reason to believe that such things exist), and conscious things, which, at least in some very small part of them, cannot.
As for a rock being conscious or not, I choose to assume it's not for simplicity and because that seems sensible, but I'm not totally against panpsychism in principle.
> So to summarise and answer your question more directly, I think there is something about our universe that allows for the generation of conscious experience and that our brains, likely on the neural level, have a bidirectional interaction with this something.
What makes you think that this relationship is bidirectional? As far as I know and have experienced, the relationship is entirely unidirectional. I literally have no idea what I am going to do next. Knowing that would require me to think about what to think, and so on.
brain-to-???: certain flows of information within our brains (somehow) trigger the creation of conscious experience
???-to-brain: our brains keep records of conscious experiences and these records are why we're able to have such conversations as these
where ??? = Consciousness, the Universe, God, your spirit, your soul, or however you want to conceptualise it (I don't know or claim to know, although I would tend to argue that your spirit/soul isn't a thing (and neither mine of course), but that's another story)
You might naturally question how we could possibly store records of conscious experiences in our brains if these are indeed not things of the realm of computation. I think we can make an analogy with a camera here. A camera captures photons, but ultimately it does not store the photon but only some numbers (pixels, bits) which represent how to restore the original photons (although with a lot of loss). Now when these bits are fed into an appropriate device, e.g. a screen, some photons vaguely resembling the originals can be reproduced.
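The camera analogy can be sketched concretely. This is a toy illustration only (all names are my own): "light" comes in, only a few coarse numbers are stored, and playback reconstructs something that resembles, but never equals, the original.

```python
# Toy analogy: a "camera" that stores only coarse records of its input
# and reconstructs a lossy approximation on playback.

def capture(intensities, levels=4, max_val=255):
    """Quantize raw light intensities into a few stored numbers."""
    step = max_val / (levels - 1)
    return [round(v / step) for v in intensities]

def replay(records, levels=4, max_val=255):
    """Reconstruct approximate intensities from the stored records."""
    step = max_val / (levels - 1)
    return [r * step for r in records]

original = [0, 100, 200, 255]
stored = capture(original)   # small numbers, not the light itself
restored = replay(stored)    # resembles, but does not equal, the input
assert restored != original  # the record is lossy
```

The point of the analogy: what sits in storage is of an entirely different kind than what was captured, yet it suffices to trigger an approximate reproduction later.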
I see the brain as similar in that qualia cause excitation in our brain which are recorded in our memory and can be replayed in some dulled down form later, at least sometimes. But our brain doesn't store actual qualia but just records of them or perhaps just of having experienced them. If this were not the case, we would not be able to have this conversation.
To avoid further confusion, it might be worth pointing out that our brain might well store records of qualia and of physical/informational stimuli simultaneously, or sometimes perhaps just of one of these. So I'm not saying our brain reconstructs images from records of conscious experiences of those images for instance (but maybe, I don't know).
>You have zero control over what you will be sending to the brain.
I agree. I never said that ??? is "me" here.
To be honest, I've come to the daunting conclusion that the metaphysical self (as opposed to the psychological/physical self) is an illusion, resulting from our brains' memories. Consciousness-wise, the me of tomorrow, or one hour from now, or one hour ago, etc., is just as distant from me now as you are from me now, or as a dinosaur millions of years ago is. And when you or anything else suffers greatly, this is just as much a concern as if I were told I will experience the same suffering in the future, although my primate brain (thankfully) does not allow me to feel quite as much angst over others' suffering as over my future self's.
As to how I came to this conclusion: just as most of us here would agree that we do not have eternal souls, since elements of our personality, memory, and so on are by all indications stored inside our perishable brains, by the same line of reasoning, and applying Occam's Razor again, we should not have a separate "spirit" (even of the barest form), since our impression of our lives being individual and continuous is the result of the same brain structures. Shedding the idea of the "spirit" also has the nice benefit of solving the various paradoxes of Star Trek teleporters, atomic-scale cloning of people, and so on.
There is also no reason to presume that our time exists at the level of consciousness, and the fact that general relativity precludes the existence of a canonical time ordering for the universe (Block Universe) also points me in that direction.
> ???-to-brain: our brains keep records of conscious experiences and these records are why we're able to have such conversations as these
There's no such thing as ???-to-brain. You have zero control over what you will be sending to the brain. And whatever it is that is sending that message to the brain, that is not "you". It is just another phenomenon of the universe. In fact, "you" don't have any control over what you are going to do next. When I "command" my brain to lift my hand up, I ultimately have no idea where that command is coming from.
It is more like "brain <-> ??? <-> outside world" or "reality <-> ??? <-> reality", because we are just witnesses to whatever reality is doing.
Xbox actually recently came out with an Xbox-branded mini fridge and it immediately made me think of this. Sounds like an April Fools' joke, but I'm pretty sure it's legit.
Amazing book, written by a former Madison Avenue employee (the subject of "Mad Men")
I think someone should make a movie or TV series out of this, it'd be the perfect parody of Mad Men in particular and our messed up hyper-corporate dystopia in general
Kleros has some complexities that we don't need, while we need some things that Kleros doesn't have, e.g. potential reviewers are not necessarily just token holders, but also other users with reputation above a certain level, etc.
Hidden services are safer in the sense that your connection can't be deanonymized with the help of your third relay (which would have been an exit node in the case of a clearnet connection). But if the hidden service in question were a honeypot, and your entry point (ISP or tor guard node) were monitored by the same entity (this second requirement also holds for clearnet connection monitoring, BTW), it would be possible to deanonymize your connection to the hidden service.
How easy the traffic analysis is would likely depend on the amount of data being transferred, so downloading a video would probably be worse than browsing a plaintext forum like Hacker News. But if we're talking about a honeypot, your browser could easily be tricked into downloading large-enough files even from a plaintext website (just add several megabytes of comments to the webpage source, for instance).
> In order to be really anon you would need a custom client side engine that randomizes the order of external resources, and pauses/resumes requests (given 206 or chunked encoding is supported), and/or introduces null bytes to have a different stream bytesize after TLS encryption is added.
It's unclear to me how any of this helps avoid traffic analysis. I believe tor already pads data into 512-byte cells, which might help a little bit.
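To illustrate what cell padding does and doesn't buy you, here's a toy sketch. The 512-byte figure matches Tor's classic fixed cell size, but the function names here are my own invention for illustration, not any real Tor API:

```python
# Toy sketch: padding application payloads into fixed-size "cells",
# as Tor does (classically 512 bytes per cell).
CELL_SIZE = 512

def to_cells(payload: bytes, cell_size: int = CELL_SIZE) -> list:
    """Split payload into cells, padding the last one with zero bytes."""
    cells = []
    for i in range(0, len(payload), cell_size):
        chunk = payload[i:i + cell_size]
        cells.append(chunk.ljust(cell_size, b"\x00"))
    return cells or [b"\x00" * cell_size]

# An observer sees only a count of uniform cells, not exact byte sizes:
# a 1-byte message and a 400-byte message both look like one cell.
assert len(to_cells(b"x")) == len(to_cells(b"y" * 400)) == 1
```

Note that large transfers still leak their approximate size in cell counts and timing, which is why padding alone doesn't defeat end-to-end correlation.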
> Tor pretends to be secure, but is dark and compromised.
Citation needed. Please stop with the "tor is compromised" meme... and what do you even mean by "dark"? What the hell... Tor is by no means a perfect anonymity solution but it's to my knowledge the best we've got. It's certainly way better than a VPN or no anonymization at all.
More specifically, tor anonymity is limited by the fact that it's low-latency. This is a fundamental limitation of any low-latency transport layer and not the fault of the tor developers or any obscure forces. In particular, if your attacker has control of both your entry point (your tor guard node or your ISP) and your exit point (tor exit node, or the tor hidden service or website you are connecting to), it becomes possible to de-anonymize your connection (to the specific exit point in question) through traffic analysis. There's just no way around that for a network meant to transport real-time traffic (as opposed to plain data or email, for instance). And yes, it stands to reason that various intelligence agencies will have invested in running exit nodes or entry nodes, but this is just unavoidable. What you can do to counteract this is run your own nodes or donate to (presumably) trustworthy node operators.
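The end-to-end correlation idea can be shown with a toy example. Real attacks use far more robust statistics; this only demonstrates the principle that low-latency relaying preserves traffic patterns between entry and exit:

```python
# Toy end-to-end correlation: an attacker who sees packet timestamps at
# both the entry and the exit compares traffic volume per time window.

def volume_histogram(timestamps, window=1.0, duration=10.0):
    """Bucket packet timestamps into fixed windows of traffic counts."""
    bins = [0] * int(duration / window)
    for t in timestamps:
        if 0 <= t < duration:
            bins[int(t / window)] += 1
    return bins

def correlation(a, b):
    """Pearson correlation between two equal-length volume histograms."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

# Bursty client traffic seen at the entry...
entry = [0.1, 0.2, 0.3, 4.1, 4.2, 7.5, 7.6, 7.7]
# ...reappears at the exit with only small relay delays added:
exit_ = [t + 0.05 for t in entry]
# An unrelated user's traffic has a different pattern:
other = [1.0, 2.0, 3.0, 5.0, 6.0, 8.0, 9.0, 9.5]

assert correlation(volume_histogram(entry), volume_histogram(exit_)) > \
       correlation(volume_histogram(entry), volume_histogram(other))
```

A high-latency mix network can break this correlation by batching and delaying messages, but then it's no longer usable for interactive browsing.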
I think it's also worth noting that although tor can by no means 100% guarantee that you will be free from government surveillance at all times, it does make mass surveillance more difficult and more error-prone, and to me that's the whole point. Furthermore, although government surveillance cannot be thwarted 100%, tor does make corporate surveillance basically impossible (assuming you can avoid browser fingerprinting; this is what the tor browser is for).
All in all, I can't claim tor is perfect (because it can't be!) but the more people use it the better it gets and it's certainly better than anything else, so please stop spreading FUD and encourage people to use it instead.
Also, it's unclear to me how Stealth helps at all with hiding the IP addresses of its participants... It claims to be "private" but the README doesn't say anything about network privacy...
The code doesn’t strike me as concerning itself with protecting privacy so much as changing who will get to log your traffic. Interesting effort though; I’ll hope for more details from them in the future!
Chill, bro. I said “seems hand-wavy” and “I’d love to be wrong”. I was hedging my bets and clearly indicating this was a surface-level read. I shouldn’t have to have a better alternative on deck to point out something in the codebase that didn’t seem to be privacy-friendly. No offense was meant.
Since you asked how I would do things: I would have had a clear and detailed security-specific document or section of the readme to detail in what ways it is peer-to-peer and in what ways it is private. I would have probably gestured towards the threat model I used when designing the protocols, but, let's be honest, I'd probably be too lazy to document it adequately. As far as I can tell, there's one paragraph in its developer guide on security and two paragraphs on peer-to-peer communication, and I wasn't able to get a good read on its concrete design or characteristics.
> Note that the DNS queries are only done when 1) there's no host in the local cache and 2) no trusted peer has resolved it either.
This wasn’t clear to me from my first spelunk through the readme or the docs. Are you affiliated with the project? Is there a good security overview of the project you know of?
> I mean, DNS is how the internet works. Can't do much about it except caching and delegation to avoid traceable specificity.
What I meant to say is, I was not so sure that the google public dns could be considered private. But nevermind on that, I can’t confirm their logging policies. I’m probably just paranoid about how easy google seems to build a profile on me. So yeah, as mentioned, just my initial read.
Hey, my comment wasn't meant in a defensive manner... I'm just curious whether I maybe missed a new approach to gathering DNS data :)
I've seen some new protocols that try to build a trustless blockchain inspired system, but they aren't really there yet and sometimes still have recursion problems.
When I was visiting a friend in France I first realized how much is censored there by ISPs and cloudflare/google and others, so that's why I decided it might be a good approach to have a ronin here.
I totally agree that the threat model isn't documented. Currently the peer-to-peer stuff is mostly manual, as there's no way to discover peers (yet). So you would have to add other local machines yourself in the browser settings.
Security-wise there are currently a lot of things changing, such as the upcoming DNS tunnel protocol, which can use dedicated other peers that are already connected to the clearnet by encapsulating e.g. HTTPS inside DNS via fake TXT queries, etc.
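For readers unfamiliar with the general technique: DNS tunnelling typically works by encoding payload bytes into the labels of a query for a domain whose nameserver the tunnel peer controls. This is a generic sketch, not the Stealth project's actual protocol; the domain `tunnel.example` and all function names are placeholders:

```python
# Generic DNS-tunnelling sketch: payload bytes are base32-encoded and
# split into DNS labels (max 63 chars each), then sent as a query for
# a subdomain of a domain whose nameserver the peer controls.
import base64

MAX_LABEL = 63

def encode_query(payload: bytes, domain: str = "tunnel.example") -> str:
    """Encode payload into the labels of a DNS query name."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_query(qname: str, domain: str = "tunnel.example") -> bytes:
    """Recover the payload from a query name built by encode_query."""
    data = qname[: -len(domain) - 1].replace(".", "").upper()
    data += "=" * (-len(data) % 8)  # restore stripped base32 padding
    return base64.b32decode(data)

msg = b"GET /index.html"
qname = encode_query(msg)
assert decode_query(qname) == msg
assert all(len(label) <= MAX_LABEL for label in qname.split("."))
```

The response data then rides back in TXT records. Of course, a recursive resolver in the middle (or anyone watching it) can log every query, so this moves trust around rather than eliminating it.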
> public dns could be considered private
Totally agree here, I tried to find as many DoT and DoH dns servers as possible, and the list was actually longer before.
In 2019 a lot of DNS providers either went broke or went commercial (like NextDNS, which now requires a unique ID per user, which defeats the purpose of it completely)... But maybe someone knows a good DoH/DoT directory that's better than the curl wiki on GitHub?
Thanks for following up with added info! I'll look forward to seeing the project progress; it's an area I'm super interested in. As far as naming systems better at privacy than DNS, I'm not aware of any serious options. Personally, I'm working on implementing something that hopes to improve the verifiability of name resolutions, but that's a long way off: https://tools.ietf.org/html/draft-watson-dinrg-delmap-02
In France for instance you can donate to laquadrature.net. They do a really great job given their small size. You will probably find similar associations in other European countries.