𝕏 (U+1D54F) decomposes[0] to X (U+0058), meaning that if you search for 𝕏, your search string will likely be converted automatically to the equivalent X.
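For what it's worth, that folding is just NFKC/NFKD compatibility normalization, which you can check with Python's stdlib unicodedata module (nothing search-engine specific here, just a quick illustration):

```python
import unicodedata

# U+1D54F MATHEMATICAL DOUBLE-STRUCK CAPITAL X has a compatibility
# decomposition to plain U+0058 LATIN CAPITAL LETTER X, so NFKC (or NFKD)
# normalization folds the two together.
fancy_x = "\U0001D54F"                       # 𝕏
plain_x = unicodedata.normalize("NFKC", fancy_x)

print(unicodedata.name(fancy_x))             # MATHEMATICAL DOUBLE-STRUCK CAPITAL X
print(plain_x)                               # X
print(plain_x == "\u0058")                   # True
```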
Of course, while a search engine could conceivably index and search the entire Unicode codespace internet-wide, doing so would be unrealistic and would provide only limited upside.
How do you decide which characters to index? The current Unicode release (15.0) includes 149,186 individual characters. I suppose you can probably ignore U+237C (Right Angle with Downwards Zigzag Arrow) seeing as nobody seems to know what it denotes.[0][1]
Most search engines for languages like English index words rather than characters, so the choice of which characters to index is made as part of deciding which words to index.
Search engines for CJK languages do tend to work at the character level, so a search for "Sona" on a certain site run by (I think) Chinese people will turn up results for "Persona".
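Roughly speaking, a character-level index behaves like this, sketched here as overlapping character bigrams in Python (my own simplification for illustration, not how any particular engine is actually built):

```python
def char_bigrams(s: str) -> set[str]:
    """Index text by overlapping character bigrams, the way many
    CJK-oriented engines tokenize, instead of by whitespace-separated words."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

doc = "Persona"
query = "Sona"

# Every bigram of the query appears among the document's bigrams,
# so a character-level index reports a match even though "Sona" is
# not a standalone word in the document.
print(char_bigrams(query.lower()) <= char_bigrams(doc.lower()))  # True
```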
I was involved with an A.I. startup where we had lots of meetings about what to do about all the strange Unicode characters, and right now on Mastodon there is a lot of concern that screen readers will choke on 𝐮𝐧𝐢𝐜𝐨𝐝𝐞 𝐛𝐨𝐥𝐝 𝐜𝐡𝐚𝐫𝐚𝐜𝐭𝐞𝐫𝐬, while it doesn't seem that difficult to squash them down to ordinary characters or treat them exactly as <b>unicode bold characters</b>.
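Squashing them down is again just compatibility normalization, here restricted to the Mathematical Alphanumeric Symbols block so the rest of the post is left alone (the function name and range check are mine, purely illustrative):

```python
import unicodedata

def squash_fancy_letters(text: str) -> str:
    """Fold Mathematical Alphanumeric Symbols (U+1D400..U+1D7FF) back to
    plain letters via their NFKC compatibility decompositions.
    Alternatively, a renderer could detect such a run and emit it as
    <b>plain text</b> to preserve the intended styling."""
    return "".join(
        unicodedata.normalize("NFKC", ch) if 0x1D400 <= ord(ch) <= 0x1D7FF else ch
        for ch in text
    )

print(squash_fancy_letters("𝐮𝐧𝐢𝐜𝐨𝐝𝐞 𝐛𝐨𝐥𝐝 𝐜𝐡𝐚𝐫𝐚𝐜𝐭𝐞𝐫𝐬"))
# unicode bold characters
```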