What is WebAssembly?
WebAssembly is a low-level bytecode for the web, similar to ARM or x86 machine code, that can be used as a compilation target for any language. It was created over the course of several years by a W3C group made up of representatives from a variety of browser, framework, and hardware vendors. They all shared a specific objective: the ability to run arbitrary binary code securely and with near-native performance.
All major browsers already support it, making it a viable target today.
WebAssembly’s definition is platform-agnostic. Despite the word “Web” in its name, it is intended to be a generic bytecode. This is what allows projects like Ethereum, Life, Nebulet, and WebAssembly for .NET to run in the absence of a browser. As Jay Phelps frequently points out, WebAssembly is neither Web nor Assembly and can be used in a variety of scenarios. It has the potential to become a true universal binary format for any type of programming. Because it promises to open up the Web to other languages and frameworks, it is a hot topic in many online discussions and forums.
What does WebAssembly look like?
Because WebAssembly is a binary format, it cannot be read by humans, but it does have a textual representation that makes it easier to understand.
Like most assembler code, it is not easy to read, but it gives a good sense of how a stack-based virtual machine operates.
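As an illustrative sketch (not taken from any particular toolchain’s output), here is the textual form (WAT) of a minimal module exporting an add function. The stack-machine style is visible in how both operands are pushed before i32.add consumes them:

```wat
(module
  ;; Export a function that adds two 32-bit integers.
  (func (export "add") (param $a i32) (param $b i32) (result i32)
    local.get $a   ;; push the first operand onto the stack
    local.get $b   ;; push the second operand
    i32.add))      ;; pop both operands, push their sum
```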
Most developers will never have to interact directly with WebAssembly, just as they never interact directly with x86-64 or ARM64. It will primarily serve as a build target, alongside x86 and ARM64, in a drop-down menu.
As seen in the default main.js sample of WebAssembly Studio, consuming a WebAssembly module typically involves a .wasm file and a JS glue file. The JS file is required to load the WebAssembly code.
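As a minimal sketch of what such glue code does (this is not the WebAssembly Studio sample itself), the JavaScript below compiles and instantiates a module and calls one of its exports. To keep the example self-contained, the bytes are a hand-assembled module exporting a single add function rather than a fetched .wasm file:

```javascript
// Hand-assembled minimal WebAssembly module exporting add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // Type section: one function type (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate synchronously (fine for tiny modules; real glue
// code typically uses WebAssembly.instantiateStreaming on a fetched .wasm URL).
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

In a browser, the same bytes would usually arrive via fetch() and the asynchronous WebAssembly.instantiate, but the instantiation and export-calling steps are identical.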
WebAssembly aims to address the challenges that make it difficult to optimize speed for large applications, as well as enable access to binary-producing languages and improve the security of running the generated code.
WebAssembly differs significantly from past attempts to run arbitrary binary code in the browser, such as Flash, Java applets, VBA, Silverlight, ActiveX, and others, in terms of security and portability. For example, one of its security features is the inability to execute arbitrary memory locations. While this makes JIT-compiled languages (such as .NET-based languages) more difficult to target, it also promises a more secure execution environment than its add-in-based predecessors.
WebAssembly and .NET
Since the beginning of 2018, Microsoft has been working on a WebAssembly port of the Mono runtime. Using the Uno Platform as a point of comparison, the runtime appears to be as reliable as it is on iOS and Android, which is quite a feat.
The .NET Core Runtime (CoreRT) team is also making significant progress on a WebAssembly implementation of the .NET Native engine.
WebAssembly’s security features, particularly the inability to execute data portions of memory, make it difficult to run IL code. Just-In-Time (JIT) compilation normally generates native instructions into data memory segments, which the CPU then executes. This security constraint in WebAssembly is comparable to those on iOS and watchOS, which prohibit such compilation techniques. The Mono team has already worked under those limits, and WebAssembly support will require the same technique.
The Mono interpreter is similar to a long-standing piece of code (mint), which was used in the early days of Mono when the JIT engine (mini) was not yet available. Its role is to execute IL instructions one by one on top of a natively compiled runtime. It enables IL code to run immediately in the appropriate environment, at the expense of execution performance.
While this is a good start, it involves one big switch over each and every opcode in the IL standard. Browsers have a difficult time optimizing this hot execution path. It also doesn’t play well with CPU data caches, such as on devices with an i5 CPU or lower that have a small L2 cache.
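The shape of such an interpreter loop can be sketched as follows. This is a toy stack machine in JavaScript, not Mono’s actual code, and the opcodes are invented for illustration; what it shows is the single large switch that every instruction must pass through, which is hard on branch predictors and caches:

```javascript
// Toy stack-machine interpreter: one big switch dispatching on each opcode,
// loosely mirroring how an IL interpreter steps through instructions one by one.
// Opcode names and encodings are illustrative, not real IL.
const OP = { LDC: 0, ADD: 1, MUL: 2, RET: 3 };

function interpret(code) {
  const stack = [];
  let pc = 0;
  while (pc < code.length) {
    switch (code[pc++]) {          // the hot dispatch point
      case OP.LDC:                 // push a constant operand
        stack.push(code[pc++]);
        break;
      case OP.ADD: {               // pop two values, push their sum
        const b = stack.pop(), a = stack.pop();
        stack.push(a + b);
        break;
      }
      case OP.MUL: {               // pop two values, push their product
        const b = stack.pop(), a = stack.pop();
        stack.push(a * b);
        break;
      }
      case OP.RET:                 // return the top of the stack
        return stack.pop();
      default:
        throw new Error(`unknown opcode ${code[pc - 1]}`);
    }
  }
}

// Computes (2 + 3) * 4
console.log(interpret([OP.LDC, 2, OP.LDC, 3, OP.ADD, OP.LDC, 4, OP.MUL, OP.RET])); // 20
```

An AOT compiler avoids this per-instruction dispatch entirely by emitting a straight-line sequence of native (or WASM) instructions ahead of time, which is why the AOT mode discussed below is expected to be substantially faster.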
Fortunately, this is only a short-term issue. When Mono’s AOT support is released, the code will be substantially faster right away, though how much faster is unknown. The size of the resulting WASM binary is also unknown, and extrapolating from other, similar-looking AOT target CPU architectures is difficult.
The interpreter will stay in Mono as part of a mixed execution mode. This will enable dynamic code generation scenarios using Roslyn to work in non-JIT environments, as well as parts of the BCL that depend on it, such as Expression compilation.