In .NET, you're supposed to use reflection for discovery and then generate the code, at runtime, in an assembly which is loaded and JITted by the runtime just like any other library.
Reflection is fine, just never use it inside a tight loop / hot code path.
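To illustrate the hot-path rule, here's a minimal sketch (MyObject and the values are made up): resolve the member once and, better still, compile an accessor once, then reuse it inside the loop.

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

class MyObject { public int PropertyValue { get; set; } }

static class Program
{
    static void Main()
    {
        var items = new MyObject[1000];
        for (int i = 0; i < items.Length; i++) items[i] = new MyObject { PropertyValue = i };

        // Slow in a hot loop: reflective call + boxing on every iteration.
        PropertyInfo prop = typeof(MyObject).GetProperty("PropertyValue");
        long slowSum = 0;
        foreach (var item in items) slowSum += (int)prop.GetValue(item);

        // Better: pay the reflection cost once, then call a compiled delegate.
        var p = Expression.Parameter(typeof(MyObject));
        Func<MyObject, int> getter =
            Expression.Lambda<Func<MyObject, int>>(Expression.Property(p, prop), p).Compile();
        long fastSum = 0;
        foreach (var item in items) fastSum += getter(item);

        Console.WriteLine(slowSum == fastSum); // True
    }
}
```

The compiled delegate avoids the per-call reflection and boxing cost, though it is still a delegate call rather than an inlined field read.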
I've written a database engine in C#/.NET and I quickly learnt what you can and can't use. Using the dynamic/ExpandoObject features has insane memory costs.
You have to be very careful which language features you use. If you want performance, then static methods are your friend, as are generics (and sometimes sealed classes) and arrays & vectors (you start to realise how important CPU cache is). Memory streams & pointers are sometimes needed, but the gains are often less than you'd think.
Visual Studio has some really good debugging tools, especially viewing the disassembly (hit a breakpoint, then Ctrl+Alt+D) to see what is really happening.
I love LINQ, but it can be awfully slow.
Lastly, when you run tests, make sure you not only do multiple runs, but also pause in between. I've seen the JIT yield >20% improvements in micro-benchmarks.
One thing to keep in mind for readers: while LINQ can be awfully slow, it often isn't, and it should be your first stop (improving on it when you find hot paths). I build game engines in C#, and I happily use LINQ to get something done; then I go back, profile my code (anything you're not getting out of your profiler is likely mythical and at best lying to you), and replace as needed.
Optimizing LINQ is a lot like optimizing Haskell (or, to a lesser extent, F#), and it isn't so much a minefield as it is dealing with the abrupt paradigm shift/learning-curve wall of a mini embedded functional programming language.
I don't think a lot of people understand exactly where some of the eagerness versus laziness boundaries are (which are essentially monad boundaries, from a functional perspective). For instance, I've suggested for some time [1] that one of the strongest things a UX (whether an IDE or an extension to an IDE) could do to help is better mark those boundaries between IQueryable/IEnumerable/IList, as I don't think a lot of people see them. (I've long considered ToList somewhat harmful, especially; it is so painfully rarely the right tool for any task and yet is so easy for people new to LINQ to use as a crutch.)
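A small sketch of that eagerness/laziness boundary (the types and values are invented): the same query body re-runs on every enumeration until you cross into materialized IList territory with ToList.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        int evaluations = 0;
        IEnumerable<int> source = Enumerable.Range(1, 5);

        // Lazy: nothing runs until enumeration, and it runs again on each enumeration.
        IEnumerable<int> query = source.Select(x => { evaluations++; return x * x; });

        int first = query.Sum();   // evaluates the selector 5 times
        int second = query.Sum();  // evaluates it 5 more times
        Console.WriteLine(evaluations); // 10

        // Materialized: evaluated once, then reused; this is the lazy/eager boundary.
        evaluations = 0;
        List<int> list = source.Select(x => { evaluations++; return x * x; }).ToList();
        int a = list.Sum();
        int b = list.Sum();
        Console.WriteLine(evaluations); // 5
    }
}
```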
Beyond that there is of course the general rules of optimizing any code: are you using the right data structure for the job and have you balanced caching the expensive operations (queries) against what isn't expensive and doesn't need to be cached.
Certainly a lot of optimizations I've seen needed over the years are simply: this lookup would be faster in a Dictionary or a HashSet or an ILookup, instead of an O(n) search through an IEnumerable or IList. (This should be especially obvious any time that you attempt a `join` of any sort against an IEnumerable or IList on either side of the `join` that you need a Dictionary, HashSet, or ILookup on one or both sides.)
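A sketch of that kind of fix (Order/Customer are invented names): replace the per-element linear scan with an index built once.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order    { public int CustomerId; public decimal Total; }
class Customer { public int Id; public string Name; }

class Program
{
    static void Main()
    {
        var customers = Enumerable.Range(1, 1000)
            .Select(i => new Customer { Id = i, Name = "C" + i }).ToList();
        var orders = Enumerable.Range(1, 1000)
            .Select(i => new Order { CustomerId = i, Total = i }).ToList();

        // O(n * m): a linear scan of `customers` for every order.
        var slowNames = string.Concat(
            orders.Select(o => customers.First(c => c.Id == o.CustomerId).Name));

        // O(n + m): build the index once, then each lookup is O(1).
        var byId = customers.ToDictionary(c => c.Id);
        var fastNames = string.Concat(
            orders.Select(o => byId[o.CustomerId].Name));

        Console.WriteLine(slowNames == fastNames); // True
    }
}
```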
That sort of data-structure optimization is largely the same consideration when you are writing do/while/for/foreach loops directly as it is in the LINQ syntax, and it can take a mental leap for developers sometimes to realize that it is mostly the same problem, even if the two sometimes "feel" different.
I guess I should have said that LINQ is a bit of a minefield; it can generate some very arcane code/expression trees from relatively simple predicates. I would have loved to have used it (it would have saved me writing a query engine), but there was simply too much performance overhead (especially memory usage).
While the GC will run under load, the JIT is treated as a luxury and, as far as I'm aware, it will only run when there's spare CPU (it makes sense not to interrupt the program while it's busy).
If you follow the Build -> Test cycle, the JIT won't kick in (the JIT will only run about 10 seconds after heavy load, even while the program is running).
It's true that the JITted code is cached, but this is lost every time you rebuild the application/library. Speaking of caching, one thing I only came across recently is that if you install the library into the GAC, the JIT seems to be even more aggressive - the code runs faster than a standalone DLL (even with identical stress-testing/wear-in).
Well, I think the benchmarks show that there's quite a bit of nuance to it. Reflection-generated code to get a property's value from its get accessor ends up 8.82x slower than what RyuJit does, because you're stuck with a method call rather than method inlining. ("GetViaProperty" is just reading from a field.)
That doesn't mean you can't reflect even more and get the backing field and generate code to access it, but now you've got your optimizing compiler hat on!
And even then, I think you still can't get 1:1 performance because you have to call the generated method vs. having it inlined.
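For what it's worth, a rough sketch of that backing-field trick (the "<Name>k__BackingField" naming is a compiler implementation detail, so treat it as fragile):

```csharp
using System;
using System.Reflection;

class MyObject { public int PropertyValue { get; set; } }

class Program
{
    static void Main()
    {
        var obj = new MyObject { PropertyValue = 42 };

        // The C# compiler names an auto-property's backing field
        // "<Name>k__BackingField". Reading it skips the getter call entirely.
        FieldInfo backing = typeof(MyObject).GetField(
            "<PropertyValue>k__BackingField",
            BindingFlags.Instance | BindingFlags.NonPublic);

        Console.WriteLine((int)backing.GetValue(obj)); // 42
    }
}
```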
Assuming the functions and types are known in advance, no virtuals etc: given something like `Main` calling `DoSomething` calling `GetPropertyValue` which is
public int GetPropertyValue(MyObject myObject) { return myObject.PropertyValue; }
When RyuJit compiles your application, it can see these methods and how they're called. `PropertyValue` can be inlined into a field access, `GetPropertyValue` can be inlined into `DoSomething` which in turn can be inlined into `Main`.
RyuJit can't see that `getDelegate` is the equivalent of `return myObject.PropertyValue`. It's a reference to some function. RyuJit has to compile calling an arbitrary method vs. calling `GetPropertyValue`.
Yes, I understand that. I've worked on JITs and fancier compilers in the (long) past, and I'm aware of inlining and devirtualization.
My question is: what is special about code that is written into memory by code generation that distinguishes it from code written into memory at program startup? I can't think of anything off the top of my head that would make inlining dynamically generated code impossible, short of possibly not wanting to toss away already-JITted code.
FWIW, the pre-RyuJIT .NET runtime was definitely capable of inlining and devirtualization. I've seen it do the optimization for IDisposable.Dispose calls, off the top of my head. I would expect RyuJIT is capable of doing this in more places.
I don't know if they've ever implemented this for delegates, though.
It was actually capable of inlining to the point where it could do some tricks that you'd normally expect only from C++. For example, if you use generics and structs rather than delegates to implement higher-order functions, like so:
class Program {
    interface IFunc<T1, T2, TResult> {
        TResult Invoke(T1 x1, T2 x2);
    }

    struct AddInt32 : IFunc<int, int, int> {
        public int Invoke(int x, int y) {
            return x + y;
        }
    }

    static T FoldLeft<T, F>(T[] xs, F f) where F : IFunc<T, T, T> {
        var res = xs[0];
        for (int i = 1; i < xs.Length; ++i) {
            res = f.Invoke(res, xs[i]);
        }
        return res;
    }

    static void Main() {
        Console.ReadKey();
        int[] xs = { 1, 2, 3, 5, 8 };
        int res = FoldLeft(xs, new AddInt32());
        Console.WriteLine(res);
    }
}
I compiled and ran it with 3.5 SP1 x86 (the old 64-bit JIT wasn't good, and won't inline in this case). It didn't inline FoldLeft, but it did inline AddInt32 into the loop - this is from VS debugger disassembly:
for (int i = 1; i < xs.Length; ++i) {
017B0106 mov edx,1
017B010B mov edi,dword ptr [ecx+4]
017B010E cmp edi,1
017B0111 jle 017B011E
res = f.Invoke(res, xs[i]);
017B0113 mov eax,dword ptr [ecx+edx*4+8]
017B0117 add esi,eax
for (int i = 1; i < xs.Length; ++i) {
017B0119 inc edx
017B011A cmp edi,edx
017B011C jg 017B0113
}
Generic code working with reference types can be shared, but passing structs for generic parameters forces the JIT compiler to generate separate instantiations of generic methods or classes for each combination of structs. From there, inlining becomes trivial. But yes, being able to pull off stuff like this is one of the cooler parts of .NET.
Yeah, my wording is a bit loose. There's a lot of clever stuff that RyuJit etc. do for performance. I believe the JVM does even more clever stuff, although I haven't dedicated much time to learning what it does better or differently.
I think it's nice to see this type of analysis. We all know that reflection is slow in most languages, but understanding the why gives me a great sense of how interesting the task of reflection is.
> I think reflection is "slow" in most languages unless you use a kind of preprocessor. Is there any exception?
Yes! I wrote a PhD on making reflection in Ruby fast. It is possible to make most reflection operations run with no peak time performance overhead at all compared to normal operations.
The main techniques needed are dynamic code compilation with speculation and deoptimisation, polymorphic inline caching, splitting and method inlining. Unfortunately, I think the .NET JIT isn't dynamic in that sense (I might be mistaken; I'm not an expert on .NET), so it can't do speculative optimisations.
Basically you see what method names have been used last time in a reflective call, and create a little call that's more like a conventional call using those names. Each time you do the reflective call you check the name against the list of names you have created conventional calls for, and use them if you can. You need to make string comparison fast, which can be done using a rope data structure. Then you need to be able to remove the check entirely if the string comes from somewhere you can control, like a constant. Then you need to inline the reflection method so you are just left with the synthetic conventional call.
So it isn't easy! Which is why it isn't usually done. But it's possible.
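Roughly the same idea can be simulated in library code in C# (a toy sketch, not what a real JIT does - a real JIT bakes the cached case into machine code and deoptimises when the assumption breaks): the first reflective call with a given name compiles a direct accessor, and later calls with the same name just hit the cache.

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// A toy inline cache for reflective property reads: one cache per target type.
static class ReflectiveCallSite<T>
{
    static readonly Dictionary<string, Func<T, object>> Cache =
        new Dictionary<string, Func<T, object>>();

    public static object Get(T target, string name)
    {
        if (!Cache.TryGetValue(name, out var getter))
        {
            // Cache miss: compile a conventional accessor for this name.
            var p = Expression.Parameter(typeof(T));
            getter = Expression.Lambda<Func<T, object>>(
                Expression.Convert(Expression.Property(p, name), typeof(object)), p).Compile();
            Cache[name] = getter;
        }
        // Cache hit: a string lookup plus a direct delegate call.
        return getter(target);
    }
}

class Point { public int X { get; set; } public int Y { get; set; } }

class Program
{
    static void Main()
    {
        var pt = new Point { X = 3, Y = 4 };
        Console.WriteLine(ReflectiveCallSite<Point>.Get(pt, "X")); // 3
        Console.WriteLine(ReflectiveCallSite<Point>.Get(pt, "Y")); // 4
    }
}
```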
In Ruby you need to make it fast no matter how hard it is, as Ruby libraries tend to use reflection (they call it metaprogramming) in inner-loop operations.
> It is possible to make most reflection operations run with no peak time performance overhead at all compared to normal operations.
Hmm... but my discussions with people who have worked on Ruby execution engines say that Ruby's normal operations are slow (because Ruby is constrained to do many slow things for normal invocations). How does Ruby reflection compare to .NET reflection and how does Ruby invocation compare to .NET invocation?
I mean that for Ruby it is possible to compile, at runtime, both

    a + b

and

    a.send('+', b)

into the same thing - machine code like

    add ra, rb, rx
    jo

Except for the jo to detect overflow, this is as good as you will get from any compiler, including .NET, Java, C and so on.
Ruby really isn't constrained to do anything slow for normal method invocations - you can optimise, I think, literally all of them away entirely if you have a JIT. For normal method invocations this was all solved back in the early 1990s in the research on making Smalltalk fast, but it wasn't fully applied to Ruby until my work (Topaz, Ruby on PyPy, achieved similar results around the same time, actually). I extended it to work with reflection (metaprogramming).
I think this is still a remnant of the mentality that "interpreted languages are slow", which is just a lazy way of looking at the problem. It hasn't been the case since I was in diapers. I've been going back and looking at old LISP machines and what kinds of optimizations they did to get those running fast. Some early space probes were even powered by LISP (these being real-time, microcontroller-like systems that are far slower than the slowest ATMega on the market right now).
Everything can be optimized at the compiler or runtime level. Even if you just scale it back to specific idiom optimizations (as you talk about in one of the talks you've done on your JVM work), you can tremendously boost speeds by 200 to 300% just by writing operation-specific optimizations.
It's a sad state of affairs that, because of this perceived "it's going to be slow", no one seems to be attempting to implement the things that Guy Steele and McCarthy implemented way back when. This sort of lazy thinking results in the "just write it in C" mentality that much of the Python community holds.
I'm glad at least you and the JVM people are working on this, as it is quite important for all CS-focused disciplines.
The project Chris works on has pretty fast normal peak operation, and it's quite possible to get stable reflection in a JVM language to be as fast as a standard method call and inlined in the same way.
It's a meta language that works at compile time. It's not just text substitution like with GCC, so the syntax can be validated. And no need for "go gen" like silly tricks.
Languages that have reflection should do everything they can so that using reflection is the exception, not the norm. Some languages that boast about simplicity are often the worst offenders in practice when it comes to reflection; just look at Go code out there, full of reflection everywhere, because the language is way too simplistic for a lot of use cases. Go even allows ad hoc type definition at runtime, out of thin air. Is this really what gophers mean by "simplicity"? Because the maintainers refuse to add any new syntax, they now basically cram everything they need into the reflect package, and then dare to say its use is "not idiomatic" while abusing reflection themselves.
Crystal's implementation wouldn't work for something like C#, though, because C# supports dynamic assembly loading at runtime and must provide reflective facilities for operating on types loaded from them. (I once wrote an extensibility system[1] and on-the-fly compiler for a game engine that took advantage of this, it was neat.)
Macros don't help you here and limiting yourself to "I can compile the whole blob at once" cuts off a lot of avenues of extensibility.
>> because the maintainers refuse to add any new syntax they now basically cram everything they need into the reflect package.
I think they want to try different avenues before deciding what features to add. Generators ("go generate") are used a lot too, but they are a pain to develop. Most of them still use templates or print statements instead of specialised packages such as go/ast, go/types, go/parser etc. to actually generate/print code. Usually you should use reflection to prototype and switch to generators only where the performance is really a big issue or where reflection can't help (i.e. defining new named types, importing/using arbitrary packages/functions).
Alternatively, you build a type system that can express what developers actually need it to express and solve the problem without reflection at all. =)
That's easier said than done. There are tradeoffs to expressive type systems too. That being said, I wouldn't mind more expressivity in the Go type system. I'm actually looking to implement my own small features (i.e. optionals, variant types) in a Go superset.
Where do you think struct tags fit in all this? Do you really think the Go designers took the time to consider the tradeoffs of such a feature? I don't think so.
But putting struct tags in Go was a good idea that promotes simplicity... I prefer the magic of generics or pattern matching or covariance to insane features like this, thank you.
Java 7 introduced a very specific API (java.lang.invoke) to make it possible to inline reflective calls.
So lookups are quite slow, but invocations are fully inlined.
The java.lang.invoke API is fast to the point that, to implement faster string concatenation in Java 9, Aleksey Shipilev used this API instead of introducing a new opcode in the JVM [1].
Does anyone have any comparable analyses for Java? I would assume the performance penalties are similar. There are many popular libraries which rely on annotations (JAX-RS, Jackson, Guice/Spring DI), and these presumably use reflection heavily, so I've often wondered what kind of impact this has vs. not using annotations.
What would really make reflection fast is the dual of the typeof(T) -> Type operation, that is, Type -> T.
Then you can do all kinds of neat but efficient things by exploiting the CLR's runtime generics [1]. This is something that would be easily done in the CLR itself, a sort of hidden universal method that could perform a single, maybe double, dispatch into the correct generic overload, but alas, we are left only to simulate this using poor imitations of inline caches.
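What we can do today is roughly this (a sketch; Describe and the caching policy are invented): close a generic method over the runtime Type once via MakeGenericMethod, cache the resulting delegate, and reuse it - one of those "poor imitations of inline caches".

```csharp
using System;
using System.Collections.Concurrent;

class Dispatcher
{
    // The generic target we want to reach from a runtime System.Type.
    public static string Describe<T>() => "Handled as " + typeof(T).Name;

    static readonly ConcurrentDictionary<Type, Func<string>> Cache =
        new ConcurrentDictionary<Type, Func<string>>();

    // Simulates Type -> T: the MakeGenericMethod + CreateDelegate cost is
    // paid once per type; every later dispatch is a dictionary hit and a call.
    public static string Dispatch(Type t) =>
        Cache.GetOrAdd(t, type =>
            (Func<string>)typeof(Dispatcher)
                .GetMethod(nameof(Describe))
                .MakeGenericMethod(type)
                .CreateDelegate(typeof(Func<string>)))();
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Dispatcher.Dispatch(typeof(int)));    // Handled as Int32
        Console.WriteLine(Dispatcher.Dispatch(typeof(string))); // Handled as String
    }
}
```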
Unity is limited by the fact that it uses an older version of .NET (2.0, or preferably even a .NET 2.0 subset). It can also be optimized by disabling exceptions. Most importantly, on AOT-compiled platforms like iOS and consoles it doesn't support dynamic JIT compilation or interfaces like Reflection.Emit, which some of the high-performance JSON serialization libraries use for optimization.
So the question of "Why is reflection so slow?" is very interesting to me as a Unity iOS developer, since I'm forced to use it without all the fancy optimizations (or figure out how to avoid using it).
Some of the big, fancy, all-singing, all-dancing JSON libraries intended for less restricted CLR platforms like Windows do all kinds of amazing tricks to reduce and optimize their use of reflection. Some of those have been stripped down and ported to Unity, but others haven't.
JSON.Net is one of the most advanced, feature-rich C# JSON libraries, and it is free, but it is not the fastest. The free version doesn't run on Unity out of the box, but there is a stripped-down, simplified version that runs on Unity, available on the Unity Asset Store for $25.
FastJSON is quite fast true to its name, free, has a lot of advanced features, is deeply customizable, is highly optimized and benchmarked against other libraries, and there's a great in-depth article comparing it to other libraries and explaining its optimization techniques. It hasn't been ported to Unity, but that might be worth doing, since it's such a nice piece of work. I asked the author about it, and he said that its speed was the result of dynamic code generation features that Unity doesn't support. That would sacrifice some of the optimizations and advanced features, but still might be worth doing.
Full Serializer is a well written free JSON library with lots of useful features that is specifically designed for Unity from the start, which avoids using anything that will limit optimization, like anything higher than .Net 2.0 subset features, advanced reflection features, LINQ, code generation and even exceptions. So far, it seems like the best balanced compromise I've found between Unity's limitations and a full set of useful features, and it looks to me like well written but not overly complex code.
JSONObject is another free simple JSON library written for Unity, which I've been using for years, and while it works for basic stuff, I'm not very happy with it (which is why I'm looking around for alternatives). It's not particularly efficient (in speed or runtime size), has a terrible API, few advanced features, and some embarrassing bugs. (It doesn't correctly handle string escapes like \r \n \u1234 etc -- c'mon, there's a very simple explicit JSON spec that defines every bit of the syntax: follow it!)
Unity now has its own JSONUtility module that at first glance seems like it might be useful, but once you try to actually use it for anything in the real or virtual world, you hit a wall, because it's a wolf in sheep's clothing. Even worse than a wolf: it's layered directly on top of Unity's infamous serialization system, and inherits all its bizarre quirks and limitations, making it practically useless (and probably slow).
First, in order to begin to understand how truly fucked up Unity's JSONUtility library truly is, you have to understand how truly fucked up Unity's serialization system is, and this great classic blog posting by Lucas Meijer bravely and honestly touches the surface, then falls off into the deep end:
For many Unity developers, an important reason for using a JSON serialization library is to avoid using Unity's serialization system. So the idea of JSONUtility being layered on top of the system they were trying to get away from doesn't appeal to them.
"This is why I am moving away from scriptable objects style services and into using static clr objects that load their configuration from csv / json txt files. I am loosing editor integration, but Im gaining peace of mind that unity wont blow up and corrupt my array of 100 items, levels, whatever." [–]nventimiglia Expert 2 points 2 years ago https://www.reddit.com/r/Unity3D/comments/2e9vlg/unity_seria...
I'd prefer to build on top of Unity's built-in optimized JSON parser if that were possible, layering a system like Full Serialization on top of it without tangling with Unity Serialization, but the current JSONUtility API makes that impossible. Unity should publish the source for JSONUtility to the community (and throw in Serialization too please, so we can at least understand how it works when we're forced to interact with it), and accept pushes, please!
The main issue with Unity's JSONUtility is that it doesn't provide a generic polymorphic runtime representation of JSON objects (like Full Serializer's fsData), since it only maps between JSON and C# objects.
So in order to read or write a JSON object that has a key "foo" there MUST be a C# class at compile time that has a serializable field named "foo".
A C# dictionary containing a key "foo" does not have a field named foo.
And there is no way to directly map between JSON and a variant type like fsData, because of the limitations of Unity's serialization system. I tried implementing a variant type like fsData (actually copying the source to fsData and changing the class name), and then implementing the ISerializationCallbackReceiver interfaces on it. But I hit a wall since On{Before,After}Deserialize has no access to the JSON dictionary, because the only keys of the dictionary that get copied to the C# objects are keys whose names are the same as statically compiled C# object fields. And On{Before,After}Serialize can't directly create the JSON dictionary, it must just fill out public fields whose compile-time names became the names of keys in the JSON object, so the JSON object will always have the same keys as C# object fields defined at compile time.
As Lucas describes in his blog posting, what you have to do in the OnBeforeSerialize callback is to prepare the object by copying its private non-serializable fields into public fields that the serializer can deal with, and those fields must be given names at compile time. So for example you could make a proxy wrapper adaptor for serializing a dictionary, that had a field "keys" that was an array of strings, and a field "values" that was an array of values, and in OnBeforeSerialize you copy all the keys of the dict to "keys", and all the values of the dict to the "values" array. (There's no way this will ever run fast, but I'm just following the API to its logical conclusion to show how futile it is to even try.)
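That proxy pattern might look like this (a self-contained sketch: the interface below stands in for UnityEngine.ISerializationCallbackReceiver so it compiles outside Unity; the field names are mine).

```csharp
using System;
using System.Collections.Generic;

// Stand-in for UnityEngine.ISerializationCallbackReceiver, so this sketch
// is self-contained; in Unity you'd implement the real interface.
interface ISerializationCallbackReceiver
{
    void OnBeforeSerialize();
    void OnAfterDeserialize();
}

// Mirror the dictionary into two parallel, statically named lists that
// Unity's serializer can handle (it can't serialize Dictionary<,> itself).
[Serializable]
class SerializableStringIntDictionary : ISerializationCallbackReceiver
{
    public Dictionary<string, int> Data = new Dictionary<string, int>();

    // Unity serializes these; the dictionary itself is invisible to it.
    public List<string> keys = new List<string>();
    public List<int> values = new List<int>();

    public void OnBeforeSerialize()
    {
        keys.Clear();
        values.Clear();
        foreach (var kv in Data) { keys.Add(kv.Key); values.Add(kv.Value); }
    }

    public void OnAfterDeserialize()
    {
        Data = new Dictionary<string, int>();
        for (int i = 0; i < keys.Count; i++) Data[keys[i]] = values[i];
    }
}
```

As the parent comment says, this will never be fast; it just shows where the API leads.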
But then you run into the other problem of Unity serialization: no support for polymorphic arrays or null values or even dictionaries, and especially not polymorphic dictionaries.
So you could write out an array of "keys" since they are all the same type, string. But each item of the "values" array would have to be the same type. i.e. a polymorphic Dictionary<string, object> would have to write out a values array of [new object(), new object(), new object()] which would not be very useful, or all its values would have to be the same type like Dictionary<string, Vector2> which would write the values out as [{x: 1, y: 2}, ...], but there would be no way to express a polymorphic array. It all boils down to having to know all possible keys of the JSON dict at compile time, and defining C# properties of those names, because it is those properties that the Unity serialization system loops over to get the keys of the JSON object.
Unity serialization also does not support null values in arrays, so if you wanted to write out a field of a particular type whose value was null, it would actually write out a default value. And if the default value was a class that contained a possibly-null reference to its own class, like class Node { Node leftChild; Node rightChild; }, it would recursively write out a whole tree of dummy default objects until it bottomed out at recursion level 7, and you'd get potentially hundreds or thousands of dummy objects. (Read Lucas's article if you don't believe me -- Unity's serialization system is really brain damaged!)
So trying to piggyback on top of Unity's serialization system isn't such a good idea if you want to handle reading and writing arbitrary JSON structures, which I do.
Also Unity serialization does not even directly support writing out dictionaries, let alone dictionaries with polymorphic values.
I wish I could see the source code so it was easier to analyze and describe its limitations. I think there's a lot more value in using an open-source library like Full Serializer, where we have access to the complete source code and can understand and fix its problems, than in flying blind with Unity's proprietary serialization system and the proprietary JSON serializer that rides on top of it.
Again, here's Lucas's blog post, which the "FUBAR" reddit posting refers to: https://blogs.unity3d.com/2014/06/24/serialization-in-unity/ -- scroll down to the end where it gets really ugly, and read the footnote at the bottom and discussion in the comments, where he explains how his example would have sent Unity into an endless loop 5 + 2 = 7 years ago, but now it bottoms out at the arbitrary depth of 7 levels of recursion, resulting in only hundreds of dummy objects. The cycle that causes trouble he's referring to isn't just a cycle in the graph of objects, it's actually a cycle where a class member refers to any object of the same class! (Like a tree of nodes).
Lucas described how hard this problem is and how deeply it is embedded in Unity's built-in presumptions:
"LUCAS MEIJER JUNE 25, 2014 AT 2:44 PM / "
"@all: what rene said."
"it’s not technically impossible to ever support null. it’s a lot of non trivial work though. We’ have to somehow serialized “inline objects” with a bool wether or not this one is null. it affects how you interact with such objects with the SerializerProperty class, as well as the prefab system. (if the “isnull” bit is marked, but a prefab sets a value anyway, what do you do). none of these are theoretically unsolvable. you would however still run into the depth limit problem, because of the way we do backwards compatible loading. When we do backwards compatible loading, we at runtime, generate a typetree for a certain object. this concept of a typetree is actually a pretty core one in unity, and already should give a good feeling on how many of our systems are built around assumptions on how datalayout is static. we indeed have the concept of a collection, but other than that that’s it. so when we generate a typetree, we actually create an object, then we serialize it in a special typetree creation mode. if you have class cycles, the typetree would still grow very big. (we cap it to 7 levels too)."
"so yeah, a ton of work. up until now we have prioritized other things, and I don’t see that changing in the near future. (I actually spent a week or two going down this rabbit hole for both null and polymorphism when I did the serialization improvements for 4.5, thinking I’d be able to get something in, but I ended up with the conclusion that it would take a lot more time than that, and that my time was better spent providing things like serialization callbacks, and other things in Unity that I feel could really use some loving)."
----
Where the rubber hits the road in a typical JSON-message-consuming Unity app trying to use JSONUtility, you will find that you actually have to parse the JSON twice: first into a generic Message { string name; } msg to get the msg.name, then switch on msg.name to parse the JSON again into a more specific WhateverMessage { string name; float foo; bool bar; } or whatever.
"Using FromJson() when the type is not known ahead of time:"
"Deserialize the JSON into a class or struct that contains ‘common’ fields, and then use the values of those fields to work out what actual type you want. Then deserialize a second time into that type."
But what if you wanted a message that contained a payload of an arbitrary JSON object or polymorphic array? Or what if you wanted to send a JSON object whose keys were content id strings -- you would have to define a C# class with a property for every possible content id you would ever want to put into the dictionary! And that would be impossible to know (and impractical to implement) beforehand.
"Supported Types
The API supports any MonoBehaviour-subclass, ScriptableObject-subclass, or plain class/struct with the [Serializable] attribute. The object you pass in is fed to the standard Unity serializer for processing, so the same rules and limitations apply as they do in the Inspector; only fields are serialized, and types like Dictionary<> are not supported."
"Passing other types directly to the API, for example primitive types or arrays, is not currently supported. For now you will need to wrap such types in a class or struct of some sort."
What you need to represent arbitrary JSON data at runtime is a variant object like fsData. Here's what fsData looks like -- it only has a private object _value member, whose type it figures out at runtime. For JSON objects, its value is a Dictionary<string, fsData>, and for JSON arrays, its value is a List<fsData>.
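A minimal sketch of such a variant (my own names; the real fsData has more cases - numbers, strings, booleans, null - plus conversion helpers):

```csharp
using System;
using System.Collections.Generic;

// A minimal variant in the spirit of fsData: one private object field whose
// runtime type says what kind of JSON value this is.
class Variant
{
    private readonly object _value;

    public Variant(object value) { _value = value; }

    public bool IsObject => _value is Dictionary<string, Variant>;
    public bool IsArray  => _value is List<Variant>;

    public Dictionary<string, Variant> AsObject => (Dictionary<string, Variant>)_value;
    public List<Variant> AsArray => (List<Variant>)_value;
    public object Raw => _value;
}

class Program
{
    static void Main()
    {
        var obj = new Variant(new Dictionary<string, Variant>
        {
            ["foo"] = new Variant(42.0),
            ["bar"] = new Variant(new List<Variant> { new Variant("baz") }),
        });
        Console.WriteLine(obj.IsObject);                // True
        Console.WriteLine(obj.AsObject["bar"].IsArray); // True
    }
}
```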
But there is no way for the Unity serializer to map between JSON objects and a variant type with a polymorphic object field, or even a dictionary like Dictionary<string, fsData> ... fsData's "private object _value" field is invisible to the serializer so any fsData will get serialized as {} (not just because _value is private, but because its type "object" doesn't qualify for serialization):
Q:
What does a field of my script need to be in order to be serialized?
A:
+ Be public, or have [SerializeField] attribute
+ Not be static
+ Not be const
+ Not be readonly
+ The fieldtype needs to be of a type that we can serialize.
Q:
Which fieldtypes can we serialize?
A:
+ Custom non abstract classes with [Serializable] attribute.
+ Custom structs with [Serializable] attribute. (new in Unity4.5)
+ References to objects that derive from UnityEngine.Object
+ Primitive data types (int,float,double,bool,string,etc)
+ Array of a fieldtype we can serialize
+ List<T> of a fieldtype we can serialize (edited)
"The bane of the iOS programmers life, when working with reflection in Mono, is that you cant go around making up new generic types to ensure that your reflected properties and methods get called at decent speed. This is because Mono on iOS is fully Ahead Of Time compiled and simply cant make up new stuff as you go along. That coupled with the dire performance of Invoke when using reflected properties lead me to construct a helper class."
"This works by registering a series of method signatures with the compiler, so that they are available to code running on the device. In my tests property access was 4.5x faster and method access with one parameters was 2.4x faster. Not earth shattering but every little helps. If you knew what you wanted ahead of time, then you could probably do a lot better. See here for info:"
"You have to register signatures inside each class Im afraid. Nothing I can do about that."
If you need to extend this to wrapping member invocations from classes without using Reflection.Emit, you can do so by creating a series of compiler hints that map a class and a function parameter list or return type.
Basically, you need to create lambdas that take objects as parameters and return an object, then use a generic function that the compiler sees ahead of time to create a cache of suitable methods to call the member and cast the parameters. The trick is to create open delegates and pass them through a second lambda to get to the underlying hint at runtime.
You do have to provide the hints for every class and signature (but not every method or property).
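The core of that open-delegate trick might look like this (a sketch; AotGetter, Wrap and Player are made-up names): the explicit generic instantiation is the "compiler hint" that forces the AOT compiler to emit the code, and the outer lambda erases the types down to object.

```csharp
using System;
using System.Reflection;

class Player { public int Health { get; set; } }

static class AotGetter
{
    // An open instance delegate over the getter, re-wrapped as
    // Func<object, object>. Mentioning Wrap<Player, int> somewhere in real
    // code is what makes the AOT compiler emit this instantiation.
    public static Func<object, object> Wrap<TTarget, TValue>(MethodInfo getter)
    {
        var open = (Func<TTarget, TValue>)Delegate.CreateDelegate(
            typeof(Func<TTarget, TValue>), getter);
        return obj => open((TTarget)obj); // one cast + one direct call, no Invoke
    }
}

class Program
{
    static void Main()
    {
        MethodInfo getter = typeof(Player).GetProperty("Health").GetGetMethod();
        Func<object, object> get = AotGetter.Wrap<Player, int>(getter);
        Console.WriteLine(get(new Player { Health = 100 })); // 100
    }
}
```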
I've worked up a class here that does this, it's a bit too long to list out in this post.
In performance testing it's nowhere near as good as the example above, but it is generic, which means it works in the circumstances I needed. Performance is around 4.5x on reading a property compared to Invoke.
It seems a bit silly to me that the author uses the number of method calls (which is only like 4?) and lines of code as a metric for how slow the function is.
Yeah, that's just to illustrate that it's doing a lot of work compared to calling the property without reflection (which isn't even a method call, as it's inlined).