<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Aestus]]></title><description><![CDATA[3D character animation research blog. If it moves, we can skin it!]]></description><link>https://solaire.cs.csub.edu/aestus/</link><image><url>https://solaire.cs.csub.edu/aestus/favicon.png</url><title>Aestus</title><link>https://solaire.cs.csub.edu/aestus/</link></image><generator>Ghost 4.35</generator><lastBuildDate>Sun, 19 Oct 2025 05:44:11 GMT</lastBuildDate><atom:link href="https://solaire.cs.csub.edu/aestus/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Voice chat in Unity and the problems therein]]></title><description><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>If, by the tribunes&apos; leave, and yours, good people,<br>
I may be heard, I would crave a word or two;<br>
The which shall turn you to no further harm<br>
Than so much loss of time.</p>
</blockquote>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><div style="text-align: right"><a href="https://www.opensourceshakespeare.org/views/plays/play_view.php?WorkID=coriolanus&amp;Act=3&amp;Scene=1&amp;Scope=scene&amp;LineHighlight=2093#2093">Coriolanus: Act III, Scene 1</a></div><!--kg-card-end: html--><p>So. It&apos;s been a while! This was</p>]]></description><link>https://solaire.cs.csub.edu/aestus/voice-chat-in-unity-and-the-problems-therein/</link><guid isPermaLink="false">666b7b5881b42f05b3e95d13</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 14 Jun 2024 00:23:04 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>If, by the tribunes&apos; leave, and yours, good people,<br>
I may be heard, I would crave a word or two;<br>
The which shall turn you to no further harm<br>
Than so much loss of time.</p>
</blockquote>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><div style="text-align: right"><a href="https://www.opensourceshakespeare.org/views/plays/play_view.php?WorkID=coriolanus&amp;Act=3&amp;Scene=1&amp;Scope=scene&amp;LineHighlight=2093#2093">Coriolanus: Act III, Scene 1</a></div><!--kg-card-end: html--><p>So. It&apos;s been a while! This was a busy year for MeXanimatoR. I&apos;ll need to write a wrap-up post for the 2023-24 academic year. But first, I&apos;d like to go over some of the details in the voice chat system&apos;s ongoing development process. Be forewarned, this will be more documentative than instructional. The journey <em>is</em> the destination, or something like that.</p><p>As a reminder, MeXanimatoR started as an asynchronous experience: recording a player&apos;s body movement and audio as a digital performance lets you play it back and perform as another player in the scene, building up a cast by layering performances on top of one another. Microphone access, recording to an AudioClip, and saving to storage have been functional for a while. By playing the AudioClip from the recorded player&apos;s head position and using 3D sound, the spatialization of audio makes you feel like you&apos;re really sharing a space with another person. </p><p>Incorporating live multiplayer between separate devices necessitates a system for transmitting voice. Body movement is synchronized using NetworkBehaviour transforms as part of the player objects, but there was no system in place for broadcasting voice. This was fine, at first. Initial tests and demos all made use of the same physical space for multiple players, close enough in proximity that they could simply hear one another. 
This presents other problems related to synchronization between play spaces, physical bodies, and avatar placement, but that&apos;s another post on my todo list related to co-location in VR.</p><p>Back to voice chat over the network, with the assumption that players are in a separate space: how do we do it? My first approach to this was to try <a href="https://assetstore.unity.com/packages/tools/audio/dissonance-voice-chat-70078">Dissonance</a>, a plugin in the Unity Asset Store. In the development of MeXanimatoR, I quite often looked at existing solutions to solve portions of the system whenever applicable. This choice in itself is a fulcrum of thinking in programming: when should we approach a problem as a configuration of existing tools, and when should we really dive into the details and develop our own system? This Twitter post has an interesting perspective on it:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">You either concieve of programming as data processing, or you concieve of it as the stitching together of black boxes (libraries and services).<br><br>Programming education should teach you how to make behavior emerge via data processing.<br><br>BUT, as you begin learn to program, you can&apos;t&#x2026; <a href="https://t.co/BmOLNAgvxP">pic.twitter.com/BmOLNAgvxP</a></p>&#x2014; Hasen Judi &#x1F1EE;&#x1F1F6; &#x1F1EF;&#x1F1F5; (@Hasen_Judi) <a href="https://twitter.com/Hasen_Judi/status/1788018054808702986?ref_src=twsrc%5Etfw">May 8, 2024</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

</figure><p>That said, knowing how and when to apply both techniques is important. But Dissonance comes highly regarded in just about every Unity community I&apos;ve encountered, and it solves a ton of problems. Our networking system comes from <a href="https://assetstore.unity.com/packages/tools/network/fish-net-networking-evolved-207815">Fish-Net</a>, another plugin, and there&apos;s a community-provided integration between the two. Huzzah! I installed and configured the plugin, and after an hour or so of tinkering, I had voice chat coming through on two local instances! On that note: testing network functionality can be tricky. I&apos;ve been using <a href="https://github.com/VeriorPies/ParrelSync">ParrelSync</a> to clone the project for running separate Unity Editor instances with one as host and the other as client, and this has been the fastest and easiest system. </p><p>So here&apos;s where the issues began. Despite being in a LAN environment with tiny round-trip times (10 ms), voice chat with Dissonance creates a latency of something like 450ms. It&apos;s wild! Where&apos;s all this coming from? Especially when the estimated latency should be much better: <a href="https://gist.github.com/martindevans/92a9de2c74e3774b95fc897a3192e9d9">this writeup</a> claims a latency of roughly 75ms while also noting that Unity&apos;s going to add around 100ms. While 150-175ms <em>is</em> better than what I was encountering, it&apos;s still rather bad for timing-critical performances. And as it happens, Unity&apos;s default <a href="https://docs.unity3d.com/ScriptReference/Microphone.html">Microphone</a> API has some fairly significant lag, too. Putting something like this in your Start() function:</p><!--kg-card-begin: markdown--><pre><code>var audio = GetComponentInChildren&lt;AudioSource&gt;();
// Use the default recording device (null); a device name here must exactly match an entry in Microphone.devices.
audio.clip = Microphone.Start(null, true, 10, 48000);
audio.loop = true;
// Busy-wait until the microphone has begun writing samples, then start playback.
while (!(Microphone.GetPosition(null) &gt; 0)) { }
audio.Play();
</code></pre>
<!--kg-card-end: markdown--><p>lets the player hear their own voice, with a delay of almost 1 second. I came across a few other posts talking about this issue, many of them concerned with Quest 2 development, so this seems to be a common pain point in standalone VR development. But I&apos;ve been on my Windows machines up to this point and dealing with the same problem!</p><p>By default, Dissonance uses the Unity Microphone calls to handle voice access. They offer an <a href="https://assetstore.unity.com/packages/tools/integration/dissonance-for-fmod-recording-213412">FMOD plugin</a> that is supposed to help with this kind of delay. I&apos;ve never worked with <a href="https://www.fmod.com/">FMOD</a> before, but the Dissonance devs state that it can provide faster access to the recording device and avoid the overhead from Unity&apos;s Microphone call. In trying it out, the delay did improve, but we were still around something like 200ms. Something else was going on. Turning on debug mode in Dissonance, I started getting warnings that echoed closed issues on the Dissonance GitHub: &quot;<a href="https://github.com/Placeholder-Software/Dissonance/issues/122">encoded audio heap is getting very large (46 items)</a>&quot; and the like. Into the past issues and Dissonance Discord I dove. A Discord thread referred to a similar problem that just so happened to involve the Fish-Net plugin and the community-provided integration layer. There was also an issue with player position tracking for correct audio spatialization, caused by network player IDs not being available at the right moment. I managed to fix this by implementing my own component for the IDissonancePlayer interface, but the thread hinted at other issues that might appear. </p><p>As a test, I made a new project that used Dissonance and <a href="https://mirror-networking.com/">Mirror</a> for the network system, which has an integration layer provided by Dissonance. 
Combined with FMOD, this did perform much better: I think it was around 60-70ms. Mirror was actually my first choice for MeXanimatoR&apos;s networking system back when I started toying around with it. There were some issues as summarized in this amusing commit history:</p><figure class="kg-card kg-image-card"><img src="https://solaire.cs.csub.edu/aestus/content/images/2024/06/image.png" class="kg-image" alt loading="lazy" width="899" height="338" srcset="https://solaire.cs.csub.edu/aestus/content/images/size/w600/2024/06/image.png 600w, https://solaire.cs.csub.edu/aestus/content/images/2024/06/image.png 899w" sizes="(min-width: 720px) 720px"></figure><p>Well, crud. I really didn&apos;t want to go back to Mirror from Fish-Net after having developed a considerable number of networking components for it. So began the old, dark thoughts once more: what if we <em>didn&apos;t</em> use Dissonance at all? We&apos;d lose all of its (very nice and helpful) features at the cost of having to make some ourselves, but could we handle this in the data-processing manner and not depend on the laggy black box?</p><p>That brings us up to speed. Over the last week, I&apos;ve been toying with FMOD for Unity. At startup, I create a Sound for recording and a Sound for playback. The latency here is much, much better - around 20-30ms between speaking and hearing yourself. Not so much of a <a href="https://www.clicktorelease.com/code/speech-jammer/">speech jammer</a>, but I&apos;ll also look into <a href="https://swharden.com/csdv/audio/naudio/">NAudio</a> as another option. So far it&apos;s showing the improvements I was hoping for from the Dissonance FMOD plugin. 
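</p><p>Here&apos;s a rough sketch of that startup path using the FMOD Core API from Unity. Treat it as illustrative only: the buffer size, mode flags, and channel setup are my working assumptions, not a drop-in implementation.</p><pre><code>// Sketch: create a looping user sound, record into it, and play it back.
// Assumes FMOD for Unity is installed; details here are approximate.
FMOD.System core = FMODUnity.RuntimeManager.CoreSystem;

var exinfo = new FMOD.CREATESOUNDEXINFO();
exinfo.cbsize = System.Runtime.InteropServices.Marshal.SizeOf(typeof(FMOD.CREATESOUNDEXINFO));
exinfo.numchannels = 1;                        // mono voice
exinfo.defaultfrequency = 48000;               // 48 kHz, matching the mic config above
exinfo.format = FMOD.SOUND_FORMAT.PCM16;
exinfo.length = (uint)(48000 * sizeof(short)); // ~1 second ring buffer

core.createSound(&quot;voice&quot;, FMOD.MODE.LOOP_NORMAL | FMOD.MODE.OPENUSER, ref exinfo, out FMOD.Sound sound);
core.recordStart(0, sound, true); // record device 0 into the buffer, looping

// For the latency test, play the same sound back while it records.
core.getMasterChannelGroup(out FMOD.ChannelGroup master);
core.playSound(sound, master, false, out FMOD.Channel channel);
</code></pre><p>From there, polling <code>getRecordPosition()</code> each frame tells you how many new samples are available to copy out of the buffer for encoding or saving.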
</p><p>At the moment, here&apos;s what the system can do:</p><ul><li>Start recording and playback for latency testing.</li><li>Access and copy samples from the sound recording on a per-frame basis.</li><li>Save the total recording to storage as an .ogg or .wav file.</li><li>Encode the per-frame samples with Vorbis.</li></ul><p>What&apos;s left to try:</p><ul><li>Transmitting encoded samples via the network.</li><li>Decoding Vorbis samples from the network back to PCM.</li><li>Playing decoded samples live.</li><li>Measuring the latency in this system.</li></ul><p>There&apos;s still quite a ways to go, but I&apos;m hoping to have most of these things testable before taking a vacation next week. Oh, ambition, you love to take such huge bites. </p><p>Direct access has its own cost - we lose some of the conveniences provided by Dissonance, such as audio quality configuration, noise suppression, and background sound removal; adding these back in takes time (both development time and little increments that add to the voice system&apos;s latency). For measuring latency, I&apos;ve just been using OBS to record the microphone and the desktop audio to separate tracks. Then I extract the tracks with ffmpeg and compare the waveforms in Audacity. It&apos;s a bit of a manual process (the coder&apos;s anathema), but it works for the moment. </p><p>That&apos;s all I&apos;ve got for now! I&apos;m hoping to write more over the summer, if only to try and keep track of all the configurations as I go through them. </p>]]></content:encoded></item><item><title><![CDATA[Computing the area of a regular infinity-gon]]></title><description><![CDATA[<p>This Old Post: from 2012 in my WordPress blogging era. </p><p>One of my favorite parts about calculus was learning about integrals. 
The mental leap I had when I realized you could compute the exact area under the curve if only you could use an infinitely small $dx$ in a Riemann</p>]]></description><link>https://solaire.cs.csub.edu/aestus/computing-the-area-of-a-regular-sided/</link><guid isPermaLink="false">6554840097397f05765f42b7</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Wed, 15 Nov 2023 09:16:06 GMT</pubDate><content:encoded><![CDATA[<p>This Old Post: from 2012 in my WordPress blogging era. </p><p>One of my favorite parts about calculus was learning about integrals. The mental leap I had when I realized you could compute the exact area under the curve if only you could use an infinitely small $dx$ in a Riemann sum is a particularly fond memory. That marks a major shift in my understanding of mathematics and, consequently, how I view the world around me (if you were wondering, probability theory, theory of computation, and JRPGs account for most of my other shifts). The following is a byproduct of post-calculus Nick.</p><p><br>If you&apos;re anything like me, occasionally you need to compute the area of a polygon. Sometimes it&apos;s for finding the barycentric coordinates of a point in a triangle; other times it&apos;s used to construct a function that selects uniformly random points from an irregularly-shaped polygon surface. The right area computation method depends on the circumstances. If it is known that a polygon will never deform, you can pre-compute the area before doing anything exciting with it. Otherwise, the area will have to be re-computed any time the polygon reports a significant deformation. If the polygon is convex (whether by construction, or verified to be so during runtime, or whatever), you can treat it like a triangle fan and find the sum of all the triangles in the fan. For n-sided <a href="http://en.wikipedia.org/wiki/Regular_polygon">regular polygons</a>, this method should also work. 
Pick a vertex $v_0$ and find the area of the triangle formed by $v_0$, $v_i$, and $v_{i+1}$ for $i$ between 1 and $n - 2$. The triangle fans would look something like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/11/fan.gif" class="kg-image" alt loading="lazy" width="298" height="255"><figcaption>Triangle fans all connected to a vertex on the perimeter</figcaption></figure><p>Note the variety in triangle size. You have to compute all $n - 2$ triangle areas to get the proper sum. If you recognize that most triangles have a mirror on the other side of the fan, you can cut this down to about $n / 2$ area calculations. However, if you instead pick the polygon&apos;s center $c$ and iterate over $i$ from 0 to $n - 1$, forming triangles from $c$, $v_i$, and $v_{i+1}$ (where $v_n = v_0$), the fan would look like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/11/center.gif" class="kg-image" alt loading="lazy" width="280" height="255"><figcaption>Triangle fans all connected at the center.</figcaption></figure><p>Those triangles look much more uniform. If you know the area of a single triangle as $A$, the polygon&apos;s total area is $nA$, since there are $n$ such triangles in the regular polygon. One float multiplication can be much nicer than multiple float additions. You know, because these ideal polygon cases pop up all the time and you want your code to be efficient ;p</p><p>To find $A$, note that the triangles in the regular polygon&apos;s fan are isosceles. The two equal sides are of length $r$, the regular polygon&apos;s radius. The angle between these two sides is $2&#x3C0; / n = &#x3B8;$. 
A single triangle is shown below with labels:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/11/Triangle.png" class="kg-image" alt loading="lazy" width="433" height="402"><figcaption>Isosceles triangle in the regular polygon</figcaption></figure><p>The area for this triangle, and the total area of the regular polygon, are computed as such:</p><figure class="kg-card kg-image-card"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/11/Formulas.png" class="kg-image" alt loading="lazy" width="399" height="418"></figure><p>So in the gifs posted above, you&apos;ll notice that a regular polygon starts to look mighty circular when $n$ grows large. For a radius $r$ and center $c$, a circle is the set of points that are distance $r$ from $c$. This is an infinite set, so a circle has an infinite number of points. Since a regular polygon&apos;s points are equidistant from its center, it seems reasonable to think of circles as regular polygons with infinitely many sides. For an infinity-gon, the area formula above should evaluate to $&#x3C0;r^2$. Except, you can&apos;t really substitute infinity for $n$, or you get infinity * 0, which is undefined. Like most cases when dealing with infinity, it&apos;s better to look at the limit as $n$ approaches infinity!</p><p>It&apos;s been a few years since I spoke fluent <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule">L&apos;H&#xF4;pital&apos;s rule</a>, so I turned to <a href="http://www.wolframalpha.com/input/?i=limit+as+n+approaches+infinity+of+%28n+*+r%5E2+*+sin%28pi+%2F+n%29+*+cos%28pi+%2F+n%29%29">WolframAlpha</a> for my limit-solving needs. You&apos;ll see that the limit of the area function as $n$ approaches infinity evaluates to $&#x3C0;r^2$, which is a pretty cool result to get. I was really excited, so I signed up for the free WolframAlpha Pro trial to download the step-by-step results to share here. 
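</p><p>For reference, the limit can also be worked by hand. Each of the $n$ isosceles triangles has area $A = \frac{1}{2}r^2\sin\theta$ with $\theta = 2\pi/n$; the double-angle identity $\sin(2x) = 2\sin(x)\cos(x)$ gives $A = r^2\sin(\pi/n)\cos(\pi/n)$, and substituting $x = \pi/n$ reduces everything to the classic $\lim_{x\to 0}\sin(x)/x = 1$:</p><p>$$\lim_{n\to\infty} n r^2 \sin(\pi/n)\cos(\pi/n) = \pi r^2 \left( \lim_{n\to\infty} \frac{\sin(\pi/n)}{\pi/n} \right) \left( \lim_{n\to\infty} \cos(\pi/n) \right) = \pi r^2 \cdot 1 \cdot 1 = \pi r^2$$</p><p>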
Free users can see up to 3 step-by-step solutions per day, so you can check it out on the WolframAlpha page, too.</p><figure class="kg-card kg-image-card"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/11/WolframAlpha-limit_as_n_approaches_infinity_of__n___r_2___sin_pi___n____cos_pi___n____Limit__2012_12_31_0710.png" class="kg-image" alt loading="lazy" width="597" height="876"></figure><p>That&apos;s some damn fine limit substitution! So there it is. Considering circles as infinity-gons is cooler, and the math checks out, too. The <a href="http://en.wikipedia.org/wiki/Area_of_a_circle#Rearrangement_proof">rearrangement proof</a> and <a href="https://en.wikipedia.org/wiki/Area_of_a_circle#Triangle_proof">triangle proof</a> for finding the area of a circle, however, are still way cooler.</p>]]></content:encoded></item><item><title><![CDATA[VS Code + SSH FS lagging]]></title><description><![CDATA[<p>If you&apos;re using VS Code and the SSH FS plugin to connect to Artemis/Odin and you&apos;ve recently noticed significant lag while opening and saving files, you&apos;re not alone. The issue seems to have started with recent updates to either VS Code or the</p>]]></description><link>https://solaire.cs.csub.edu/aestus/vs-code-ssh-fs-lagging/</link><guid isPermaLink="false">64518eb23986ac0521112f8b</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Tue, 02 May 2023 22:32:34 GMT</pubDate><content:encoded><![CDATA[<p>If you&apos;re using VS Code and the SSH FS plugin to connect to Artemis/Odin and you&apos;ve recently noticed significant lag while opening and saving files, you&apos;re not alone. The issue seems to have started with recent updates to either VS Code or the SSH FS plugin version. There&apos;s a GitHub issue for SSH FS that discusses this. 
Here&apos;s a link to the <a href="https://github.com/SchoofsKelvin/vscode-sshfs/issues/380" rel="noopener noreferrer">thread</a>, and here&apos;s a link to a <a href="https://github.com/SchoofsKelvin/vscode-sshfs/issues/380#issuecomment-1519562162" rel="noopener noreferrer">comment</a> in the thread that recommends installing specific versions to avoid the issue.</p><p>The easiest way for me to resolve this was to roll back to previous versions of both VS Code and SSH FS. So far on Windows 10 and 11, VS Code version <a href="https://code.visualstudio.com/updates/v1_76" rel="noopener noreferrer">1.76.2</a> and SSH FS version 1.25.0 have been fast and stable. You&apos;ll most likely want to use the User download link for Windows rather than the System download link. The main difference is that User installs to your account&apos;s AppData folder and does not need admin rights, while System installs to your Program Files directory and may need admin rights.</p><p>To roll back to a previous version of SSH FS, find it in the Extensions tab on VS Code and click the arrow next to Uninstall, then choose Install Another Version and choose 1.25.0:</p><figure class="kg-card kg-image-card"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/05/image.png" class="kg-image" alt loading="lazy" width="814" height="345" srcset="https://solaire.cs.csub.edu/aestus/content/images/size/w600/2023/05/image.png 600w, https://solaire.cs.csub.edu/aestus/content/images/2023/05/image.png 814w" sizes="(min-width: 720px) 720px"></figure><p>You may also want to turn off automatic updates for VS Code. In File &gt; Preferences &gt; Settings, type update in the Search bar and change the method to none or manual. 
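</p><p>If you prefer the command line, VS Code can also pin an extension version directly (this assumes the extension&apos;s Marketplace ID is <code>Kelvin.vscode-sshfs</code>; double-check the ID on its Marketplace page):</p><pre><code>code --install-extension Kelvin.vscode-sshfs@1.25.0
</code></pre><p>The update setting mentioned above looks like this: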
</p><figure class="kg-card kg-image-card"><img src="https://solaire.cs.csub.edu/aestus/content/images/2023/05/image-1.png" class="kg-image" alt loading="lazy" width="881" height="227" srcset="https://solaire.cs.csub.edu/aestus/content/images/size/w600/2023/05/image-1.png 600w, https://solaire.cs.csub.edu/aestus/content/images/2023/05/image-1.png 881w" sizes="(min-width: 720px) 720px"></figure><p>Maybe once the semester&apos;s over, I&apos;ll go back to the latest versions of these and see if the issue has been fixed. But in the last few weeks of busyness, waiting &gt; 20 seconds to view a folder&apos;s content, open a file, or save changes is just too much to bear. </p>]]></content:encoded></item><item><title><![CDATA[Networking in Unity]]></title><description><![CDATA[<p>The ever-changing standards of network support in Unity have made me more apprehensive about working on this feature than any other. For two prior projects, I used <a href="https://docs.unity3d.com/Manual/UNet.html">UNet</a>, but that&apos;s been deprecated for a while. For a game jam, we used <a href="https://www.photonengine.com/pun">Photon</a>, and for <a href="https://www.playtheknave.org/">Play the</a></p>]]></description><link>https://solaire.cs.csub.edu/aestus/networking-in-unity/</link><guid isPermaLink="false">63c12ebcdb7a80056d07dcb2</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 13 Jan 2023 10:36:28 GMT</pubDate><content:encoded><![CDATA[<p>The ever-changing standards of network support in Unity have made me more apprehensive about working on this feature than any other. For two prior projects, I used <a href="https://docs.unity3d.com/Manual/UNet.html">UNet</a>, but that&apos;s been deprecated for a while. For a game jam, we used <a href="https://www.photonengine.com/pun">Photon</a>, and for <a href="https://www.playtheknave.org/">Play the Knave</a>, we had a custom solution implemented as a .NET plugin. 
Granted, there we only really used localhost connections to let the game and KinectDaemon talk to each other and upload game files to the server. </p><p>Before jumping in, I spun my wheels for a bit researching current solutions. <a href="https://docs-multiplayer.unity3d.com/netcode/current/about/index.html">Netcode for GameObjects</a> doesn&apos;t seem quite ready, but I had heard good things about <a href="https://github.com/MirrorNetworking/Mirror">Mirror</a> as a spiritual successor to UNet. But I hit some issues that couldn&apos;t be resolved after a couple of days of debugging, and I wasn&apos;t that deep into integration, so I jumped ship and am trying out <a href="https://github.com/FirstGearGames/FishNet">Fish Networking</a>. Special thanks to this <a href="https://docs.google.com/spreadsheets/d/1Bj5uLdnxZYlJykBg3Qd9BNOtvE8sp1ZQ4EgX1sI0RFA/edit#gid=127892449">Google Sheet</a> for helping me navigate the options. </p><p>So far, it&apos;s been promising! I&apos;ve been able to get two editors running and connected to each other with player movement being properly broadcast. At the moment, I&apos;m testing how well it works with a VR headset. Good news - the Quest 2 connects to an Editor session and we can see each other&apos;s movement! Bad and funny news - the Quest player can locally control both their avatar and the other player&apos;s. There&apos;s also a bit of a strobing issue on the Editor side; the VR player&apos;s movement is being shown on the networked player, but the position data is intermittently popping in and out of place. I&apos;ve not dug too deeply into the configuration to find the cause, but it&apos;s nice to make rapid progress. Soon, I&apos;ll need to start experimenting with dedicated servers and headless builds to have a server instance running on Solaire. I&apos;ve also got my eye on the plugin <a href="https://assetstore.unity.com/packages/tools/audio/dissonance-voice-chat-70078">Dissonance</a> (its name an homage to Discord?) 
for voice chat, which supports Fish Networking as a transport. </p><h2 id="prototypes-and-tests">Prototypes and tests</h2><p>Prior to any of this in Unity, I spent some time over winter break experimenting with peer-based networking in the browser. Different ball game - as is demonstrated when you look at Unity networking plugins and every single one has caveats and workarounds listed for WebGL builds - but nothing too crazy. These tests were basically &quot;how do I get 2D player data shared between multiple instances using divs and some client-side JS?&quot; And I built a few different versions, each with client-side JS:</p><ul><li>PHP middleware, MariaDB storage (lol)</li><li>PHP middleware, PHP session storage (forcing all clients to access the same session with a server-side predetermined ID)</li><li>Node middleware using <a href="https://socket.io/">socket.io</a></li><li>Node middleware using <a href="https://peerjs.com/">PeerJS</a></li><li>Node middleware using <a href="https://geckos.io/">geckos.io</a></li></ul><p>I treated these as a warmup for (and distraction from) Unity-side networking. There&apos;s something very satisfying about VS Code, a terminal, and a browser for your dev env. And let us never forget the incredibly powerful browser DevTools &#x1F64F;&#x1F3FC;. It did bring up some interesting questions:</p><ul><li>For small groups ( &lt; 10 players) in collaboratory/social VR, do we want peer-to-peer networking or a client-server arrangement?</li></ul><p>Peer networking is appealing - a server is still needed to match players together, but then something (e.g. WebRTC) handles communication between clients afterward. But it gets worse with increased players, and there&apos;s no central authority in cases of logging or abuse. </p><ul><li>If we use client-server, do we want to allow players to be hosts, or should we use a dedicated server? 
</li></ul><p>The default for Mirror and Fish is client-server, and out of the box it allows a player to be both the host and a client. There&apos;s also support for headless builds (<a href="https://forum.unity.com/threads/unity-2021-2-dedicated-server-target-and-stripping-optimizations-now-live-please-share-feedback.1143734/">Unity dedicated servers</a>), which will be helpful for persistence. </p><ul><li>With networked play, how should we approach performance recording?</li></ul><p>Initially, the project is set up to record one player&apos;s movement and voice to disk storage during a performance. This works fairly well on Quest 2, which is currently the most critical platform (I&apos;m not as worried about disk performance on desktop VR). But I don&apos;t know how well it will scale with multiple players at a time. Granted, these recordings have stayed fairly small, and there&apos;s room for some optimization in audio compression and delta transform tracking. Additionally, the recording system for movement data depends on tagged components with unique IDs - these IDs will also need to be synced across the network before recording begins to avoid crossover issues, so that player 1&apos;s set of IDs is the same on all ends of the network. </p><p>Those are all reasonable problems with solutions I can more or less imagine before committing to code. The trickier one is below:</p><h2 id="timing-time">Timing time</h2><ul><li>Given that the network introduces lag, and synchronization is important for performances, how do we reconcile the lag with the performance? </li></ul><p>By default, each client (besides the host-client) has a bit of lag. Here&apos;s a scenario:</p><ul><li>Players P1 and P2, where P1 is the host,</li><li>P1&apos;s lag: 0 ms and P2&apos;s lag: 25 ms (a generous measure on a good day for Spectrum)</li><li>P1 picks a 2-player scene and clicks start. 
A 3-second countdown begins before the lines start playing.</li></ul><p>The &quot;Start scene&quot; action can be decorated as a remote procedure call so that all connected clients run the same code with the same data. But whether they should run the command at the same actual time is less clear. </p><ol><li>There&apos;s always a 3-second countdown before a scene begins. </li><li>P1 clicks start. </li><li>Each connected player knows their latency.</li></ol><p>What about offsetting a player&apos;s start time by their latency? After the start scene command is received, P2 could begin the scene after 2.975 seconds instead of 3. From P1&apos;s perspective, P2&apos;s lines and movement come in right on time. But for P2, P1&apos;s data would be twice as late, having started the scene early and still having to account for their lag. If P2 starts after 3 seconds and P1 starts after 2.975 seconds, then it&apos;s the opposite problem. If both players start the scene &quot;on time&quot;, then P1 will feel P2&apos;s 25 ms lag, but P2 would seemingly get P1&apos;s data right on time! </p><p>I&apos;m amazed whenever I play social VR games that handle this problem well, VRChat being at the top of the list. I need to revisit my <a href="https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/">Gaffer on Games</a> for further reading. </p><h2 id="in-record-time">In Record Time</h2><p>We can record performance data, but how should that work in the networked version?</p><ol><li>Each client records their own performance and just that.</li><li>Each client records performances of all players.</li><li>The server records all performances.</li></ol><p>Option 1 is quite appealing - locally the data is all on-time, and consolidating the files after the scene ends should give a reproduction that is fairly faithful to the actual timing. 
You&apos;ll bear witness to lag still - P1 will physically react 25 ms too late to P2&apos;s gesture - but it&apos;s a close match to how things &quot;should be&quot; for a finished recording.</p><p>Option 2 can help reveal differences between experiences for debugging, if it&apos;s not too much strain on the disk I/O for the Quest 2. </p><p>Option 3 is a nice choice as well, if the server-that-is-also-a-client can keep up with the Quests. But if #2 works, then this should too? It&apos;s also a good option for a dedicated server and keeping a log of player behavior. </p><p>Knowing me, I will most likely ignore the recording challenges until networking is working as I like it, then return to recording and break many things in the process &#x2013; the fixes of which will in turn break some things on the networking side, and so it oscillates until a steady state is reached. Ah, agile software development. It&apos;s too ambitious to expect all of these issues to be worked out before the semester starts again in a couple of weeks, but hopefully we&apos;ll have enough in motion to build on and flesh out by then. </p><p></p>]]></content:encoded></item><item><title><![CDATA[Platform support]]></title><description><![CDATA[<p>Happy New Year! Over the last few days I&apos;ve been testing how this Unity scene runs on Meta Quest 2 vs Valve Index. This is only an initial, exploratory effort to minimize the friction of developing for multiple devices. I&apos;m mostly concerned with text readability because</p>]]></description><link>https://solaire.cs.csub.edu/aestus/platform-support/</link><guid isPermaLink="false">63b6ad3cc776a7053ba89351</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Thu, 05 Jan 2023 12:27:43 GMT</pubDate><content:encoded><![CDATA[<p>Happy New Year! Over the last few days I&apos;ve been testing how this Unity scene runs on Meta Quest 2 vs Valve Index. 
This is only an initial, exploratory effort to minimize friction developing with support for multiple devices. I&apos;m mostly concerned with text readability because of the whole <a href="https://solaire.cs.csub.edu/aestus/investigative-streaming-vrchat#theres-always-a-but">text karaoke aspect</a>. If you&apos;re using the Oculus-specific plugins and such in Unity, then you can use all sorts of nice features, including <a href="https://developer.oculus.com/documentation/unity/unity-ovroverlay/">compositor layers</a>, for better clarity on text. </p><p>However, if you move away from OpenXR and switch to developing with the Oculus or SteamVR plugins, everything gets a little more complicated. The whole point of using OpenXR is to avoid some of the pain points found with developing for multiple runtimes. Because of a push to get things ready for a demo at <a href="https://meaningfulplay.msu.edu/">Meaningful Play</a> (a post for another day), the entire focus had been getting this to run standalone on a Meta Quest 2. Now that the conference has passed, I&apos;ve wanted to spend some time on checking how it runs elsewhere. Not to mention, having the ability to rapidly test iterations without a device deployment is invaluable. So the motivation is there just to aid in debugging. </p><p>So with the Unity project configured to use OpenXR, I&apos;ve been able to run the scene under Play mode in the Unity Editor with both the Valve Index (via SteamVR on Windows, Linux TBD) as well as the Meta Quest 2 (via Oculus Link on Windows). The Unity project&apos;s Android configuration is also set to use OpenXR and utilize the OculusXR and Meta Quest Features. Deploying this and running it on a Quest 2 has still worked, which is very encouraging. </p><h2 id="woe-de-plate-forme">Woe de plate-forme<a href="#footnote">*</a></h2><p>That about summarizes the good parts of using OpenXR. 
Now here&apos;s some of the bad:</p><p>Good things come to those who wait, and <a href="https://forum.unity.com/threads/openxr-compositor-layers.1197517/">evidently compositor layer support is coming to OpenXR at some point</a>. Until then, I would need to switch to using OVR for the provider, which means giving up the use of OpenXR conveniences. The same goes for using anything specific to SteamVR. For instance, the HTC Vive Tracker 3.0 devices work well with the Valve Index base stations &#x2013; hello low-cost mocap! But these trackers aren&apos;t available by default in OpenXR. <a href="https://forum.unity.com/threads/openxr-and-openvr-together.1113136/#post-7803057">Valiant efforts</a> have been made to consolidate tracking features from OpenXR and the SteamVR plugin for Unity, but the unofficial solutions given in the thread did not work for me after a night spent trying, which is discouraging. I&apos;ve come across a <a href="https://steamcommunity.com/app/250820/discussions/8/2986411348896445645/">couple of other threads</a> of <a href="https://steamcommunity.com/app/250820/discussions/8/3187988018476738869/">people dealing with similar issues</a> and frustrations at the lack of support. So for the moment, further attempts to supplement lower body IK with additional trackers are on pause.</p><p>Speaking of lower body movement, <a href="https://developer.oculus.com/documentation/unity/move-overview/">Meta&apos;s Movement SDK</a> is another thing we currently miss out on by sticking with OpenXR. We gave this a shot soon after release, but came up short in terms of seeing anything usable from the Body Tracking SDK:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Tried to try this in Unity - updated the Oculus Integration plugin and the Quest OS to 46. Confirmed that permissions for body tracking were approved in logcat. But got this error whenever we ran it on the Quest. 
Maybe we&apos;ll let it cook for a while longer and try again later. <a href="https://t.co/0Wva8RLQ5G">https://t.co/0Wva8RLQ5G</a> <a href="https://t.co/ZyRexB9Cyj">pic.twitter.com/ZyRexB9Cyj</a></p>&#x2014; vkNick (@njtoothman) <a href="https://twitter.com/njtoothman/status/1585379426375962624?ref_src=twsrc%5Etfw">October 26, 2022</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>We followed the release notes as best as we could, but no luck. After that, I figured it was worth waiting until there were more examples floating around, which thankfully <a href="https://github.com/oculus-samples/Unity-Movement">didn&apos;t take too long to appear</a>. Now that these are out, I&apos;ll give it another try soon to see how things stand. The real win would be getting reasonable lower body tracking on a Quest 2.</p><p>Of course, these things utilize the specific features from the Oculus Utilities plugin, which means supporting them comes at a project management cost. So here&apos;s the short list of the things that have been interesting and/but troublesome to support in the same context:</p><ul><li>Body and hand tracking from the Meta Movement SDK (Quest platforms)</li><li>Vive trackers (SteamVR and eventually OpenXR)</li><li>Compositor layers for text readability (Oculus and eventually OpenXR)</li></ul><h2 id="whats-next">What&apos;s next?</h2><p>I&apos;m aspiring to have this game supported across a range of devices. This won&apos;t solve the problems that came out of limited Kinect longevity, but it will mitigate it as much as possible and ideally make it easier to support newer, better devices as they emerge. OpenXR goes a long way in making this possible, but the tradeoff involves waiting for certain things. When it comes to delivering the best experience on each specific headset, the <a href="https://forum.unity.com/threads/how-to-detect-if-headset-is-available-and-initialize-xr-only-if-true.927134/#post-6088392">advice from Unity</a> is to handle it with separate prefabs and/or scenes that cater to that platform&apos;s needs. Which... works, but feels a little bad?</p><p>I&apos;m not very far off from following this pattern, but it doesn&apos;t solve everything. For instance, if I wanted to use Vive trackers and specify SteamVR/OpenVR as the XR plugin over OpenXR, that means changing a project setting, not a scene setting. 
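</p><p>For the scene-level differences that separate prefabs <em>can</em> cover, the pattern is at least simple &#x2013; a sketch with illustrative prefab names:</p><pre><code class="language-C#">using UnityEngine;

// Spawns the rig prefab that matches the build target. Each prefab
// would carry its platform&apos;s plugin-specific components.
public class PlatformRigLoader : MonoBehaviour
{
    public GameObject questRig; // standalone Android/Quest build
    public GameObject pcRig;    // desktop VR: Index, Link, etc.

    void Start()
    {
#if UNITY_ANDROID &amp;&amp; !UNITY_EDITOR
        Instantiate(questRig);
#else
        Instantiate(pcRig);
#endif
    }
}</code></pre><p>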
To avoid issues when making a build for one platform vs. another, I might end up having the same repository cloned into adjacent folders with descriptive names - one for the Quest build, another for the SteamVR build, etc. This is more applicable when dealing with the choice of build platform, as the &quot;Switch platforms&quot; option in the Unity build system can take a staggering amount of time to complete due to asset reimports and such. </p><p>Ultimately, things are looking up for supporting multiple devices. Some of the really cool features may have to wait, but that also helps draw boundaries around core features that deserve attention first before extending further support. As fast as things move, there&apos;s always a risk of creating something that works out of madness, only for it to be made obsolete by official packages and specification updates. Good problems to have, but bad for burnout and sanity. </p><p>Still, who knows what the next semester will bring? A great deal of effort was made in the fall - general quality-of-life improvements across the board - and the wishlist is still quite long:</p><ul><li>live multiplayer support</li><li>downloadable builds for desktop (easy)</li><li>downloadable apk for Quest (easy to give, harder to install since it requires dev mode)</li><li>possible pursuit of <a href="https://developer.oculus.com/blog/how-to-prepare-for-a-successful-app-lab-submission/">AppLab support</a> to ease Quest 2 installation (requires a fairly involved review and approval process that may be complicated to satisfy given our features involve deliberately recording voice and movement data)</li><li>more avatars and play spaces</li><li>web-based performance viewer</li><li>possibly even a web-based version? 
Hello WebXR!</li><li>internal development for pedagogy and experiment purposes</li></ul><p>So, going into 2023, I think we&apos;ll have our hands full :)</p><!--kg-card-begin: markdown--><p><sub id="footnote"><a href="#woe-de-plate-forme">*</a>Platform woes, but spoken in the same manner as <em>eue de toilette</em></sub></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[VRChat and VR performances]]></title><description><![CDATA[<p>As you can probably tell from my <a href="https://solaire.cs.csub.edu/aestus/uis-etc/">last</a> <a href="https://solaire.cs.csub.edu/aestus/more-on-uis/">few</a> <a href="https://solaire.cs.csub.edu/aestus/uis-for-multiple-platforms/">posts</a>, I&apos;ve been digging into Unity code to handle UI accessibility between desktop and XR platforms. Although there have been some great strides forward, there&apos;s nothing like forward progress to remind you of how much remains ahead.</p>]]></description><link>https://solaire.cs.csub.edu/aestus/investigative-streaming-vrchat/</link><guid isPermaLink="false">62761f94bfb6f90558f3eeda</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Sat, 07 May 2022 09:20:42 GMT</pubDate><content:encoded><![CDATA[<p>As you can probably tell from my <a href="https://solaire.cs.csub.edu/aestus/uis-etc/">last</a> <a href="https://solaire.cs.csub.edu/aestus/more-on-uis/">few</a> <a href="https://solaire.cs.csub.edu/aestus/uis-for-multiple-platforms/">posts</a>, I&apos;ve been digging into Unity code to handle UI accessibility between desktop and XR platforms. Although there have been some great strides forward, there&apos;s nothing like forward progress to remind you of how much remains ahead. With that in mind, there&apos;s something that&apos;s been on my mind for a while and a tweet I recently saw really captured it:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">The lack of content for VR gaming outside of VRChat is staggering. If anything you gotta give VRChat a big appreciation to making it easy for making VR games on their platform. 
Honestly a lot of the game worlds could be their own independent game.</p>&#x2014; Lucifer MStar (@LuciferMStarVRC) <a href="https://twitter.com/LuciferMStarVRC/status/1522565868886106114?ref_src=twsrc%5Etfw">May 6, 2022</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>That thread is worth a look-through. I&apos;ve always been a sucker for game games. You know what I mean: games that are really a platform for players and creators to make insane stuff: LittleBigPlanet, Second Life, Roblox, Garry&apos;s Mod, etc. <a href="https://hello.vrchat.com/">VRChat</a> is no exception. </p><p>A few years ago, late one night, I hosted a public room called &quot;late nite computering&quot; in <a href="https://www.bigscreenvr.com/">Bigscreen Beta</a>. A friend and I were just screen sharing dumb gifs and webms when a guy joined and chatted about his work in VR. That guy turned out to be <a href="https://twitter.com/vrdesignguy">Ron Millar</a>, whose name I recognized from my old Blizzard game manuals. I gushed about reading the WarCraft II manuals over and over and poring over the artwork, and got to hear anecdotes about his time there and Chris Metzen, so that was a childhood meet-your-hero moment come-to-life that I&apos;m still not completely over. </p><p>Gushing aside, he told us about the work he had been doing on VRChat, and talked about how exciting the platform development was at that time (and still is). We spoke about social VR experiences we had thus far: Bigscreen was and truly is a comfortable feeling. It can feel immensely cozy having spatialized audio with head and hand gestures from the folks sharing a space with you. IMO, Bigscreen&apos;s advantage was having fixed seating with teleportation, so the distance between players is a feature. Ron talked about what VRChat had to offer: all of the above, plus highly-customized avatars, player-created games and content, live events, concerts, and more. 
</p><h2 id="temptations">Temptations</h2><p>As a platform, VRChat solves quite a lot of problems that I&apos;m anticipating with my own VR project:</p><ul><li>Network connectivity.</li><li>So many avatar choices.</li><li>Interactive props!</li><li>Cross-platform support: desktop mode (KBM and gamepad), PC VR, Quest, and more. The more platform support, the better.</li><li>Voice chat between players. With spatialized audio! </li><li>Hosted rooms, sharing contexts and instances.</li><li>Social features: friends, invites, avatar/voice muting.</li><li>Content creation and sharing.</li><li>Plugin support for lots of nice Unity things: IK solvers, rich text, video playback (even with subtitle files!)</li><li><a href="https://docs.vrchat.com/docs">Honestly decent documentation</a>!</li></ul><p>So rather than try to reinvent enough wheels to build a semitruck, it&apos;s very tempting to consider VRChat as the platform for shared virtual performances. A self-contained solo-dev project simply won&apos;t be able to beat it in terms of audience, never mind the features listed above. </p><h2 id="theres-always-a-but">There&apos;s always a but...</h2><p>However, there are always terms and conditions. For starters, the use of Unity components in building a VRChat world is <a href="https://docs.vrchat.com/docs/whitelisted-world-components">restricted to a whitelist</a>. Makes sense - arbitrary component support from Unity would probably be a nightmare. So, the nice karaoke text player component I made recently? The one that can make super smooth text like this? 
</p><figure class="kg-card kg-video-card kg-card-hascaption"><div class="kg-video-container"><video src="https://solaire.cs.csub.edu/aestus/content/media/2022/05/smooth3.mp4" poster="https://img.spacergif.org/v1/2560x1440/0a/spacer.png" width="2560" height="1440" loop autoplay muted playsinline preload="metadata" style="background: transparent url(&apos;https://solaire.cs.csub.edu/aestus/content/images/2022/05/media-thumbnail-ember249.jpg&apos;) 50% 50% / cover no-repeat;"></video><div class="kg-video-overlay"><button class="kg-video-large-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button></div><div class="kg-video-player-container kg-video-hide"><div class="kg-video-player"><button class="kg-video-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button><button class="kg-video-pause-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/><rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/></svg></button><span class="kg-video-current-time">0:00</span><div class="kg-video-time">/<span class="kg-video-duration"></span></div><input type="range" class="kg-video-seek-slider" max="100" value="0"><button class="kg-video-playback-rate">1&#xD7;</button><button class="kg-video-unmute-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/></svg></button><button class="kg-video-mute-icon kg-video-hide"><svg 
xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/></svg></button><input type="range" class="kg-video-volume-slider" max="100" value="100"></div></div></div><figcaption>Top: karaoke text component with smooth letter fill. Bottom: karaoke text component with discrete letter fill.</figcaption></figure><p>That&apos;s probably not an option. Which is sad, because it&apos;s made using the TextMeshPro component, which <em>is</em> supported, but it&apos;s also using its own shader and script to control the shader&apos;s uniforms, which doesn&apos;t look like it&apos;s supported. I might be wrong, but that seems to be the case. </p><p>There&apos;s also the future plans for this project. I&apos;ve been working on making performance recordings, which essentially contain layered movement and audio data from each player (or from the same player multiple times) synchronized for playback. I&apos;ve done this before with Kinect data, but our synchronization setup was painful and prone to drift - even for a single performance&apos;s video and audio! </p><p>For the movement data in VR, this would actually be fewer points than needed to record Kinect data (20 or 25 joint positions and rotations per player) since it primarily comes down to the avatar&apos;s root transform, plus the head and hand transforms. 
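</p><p>Replaying that handful of transforms is conceptually simple &#x2013; here&apos;s a sketch that interpolates recorded samples onto a target object an IK solver follows (the data layout is illustrative, and hands and root would be driven the same way):</p><pre><code class="language-C#">using System.Collections.Generic;
using UnityEngine;

// Replays recorded samples by moving the Transform an IK solver
// (e.g., FinalIK&apos;s VRIK) is configured to track; the solver
// fills in the rest of the body. Only the head is shown.
public class PerformanceReplayer : MonoBehaviour
{
    public Transform headTarget;        // the solver follows this
    public List&lt;float&gt; times;           // sample times, ascending
    public List&lt;Vector3&gt; headPositions;
    public List&lt;Quaternion&gt; headRotations;

    float clock;
    int i;

    void Update()
    {
        if (times.Count &lt; 2) return;
        clock += Time.deltaTime;
        // advance to the pair of samples surrounding the current time
        while (i &lt; times.Count - 2 &amp;&amp; times[i + 1] &lt; clock) i++;
        float t = Mathf.InverseLerp(times[i], times[i + 1], clock);
        headTarget.position = Vector3.Lerp(headPositions[i], headPositions[i + 1], t);
        headTarget.rotation = Quaternion.Slerp(headRotations[i], headRotations[i + 1], t);
    }
}</code></pre><p>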
Saving these values and replaying them on a rig set up with <a href="http://root-motion.com/">FinalIK</a> is &#x2013; OK, stuff like that is always a bit of a pain, especially when it comes to setting up a generalizable prefab that can basically be spawned in and told to replay the movement &#x2013; within reach. Even for desktop players, having the player&apos;s root position and rotation saved will allow the IK system to approximate the other bits: locomotion, orientation, head gaze. Granted the VR players will steal the show with more natural-looking movement, it&apos;s still exciting to think about! And it just doesn&apos;t seem possible with VRChat.</p><h2 id="capturing-captivating-performances">Capturing captivating performances</h2><p>Like FinalIK, AVPro is another external tool supported by VRChat. In their capacity, I believe it&apos;s mainly used for audio/video playback. But it also has components for capturing audio and video. I&apos;ve used these before to record the screen performance for <em><a href="https://www.playtheknave.org/">Play the Knave</a></em>. From the look of it, AV <em>capture</em> isn&apos;t on the supported list for VRChat. So if players want to record a performance in-game, they have to do it themselves. Granted OBS, Radeon Record, and ShadowPlay have all been excellent when I&apos;ve used them. But there&apos;s a few reasons why we want to go with a game-based replay file (i.e., record the players&apos; movements and voice), as opposed to a screen-based capture approach:</p><ol><li>Screen recordings from first-person VR perspectives are not especially compelling. It feels great to play a game in VR, but it feels less than great to watch someone else&apos;s first-person VR experience, especially as a recording. 
This is why <a href="https://www.twitch.tv/naysy">some VR streamers</a> make the extra effort to set up external cameras, use depth sensing or green screens to filter out the background, and synchronize an in-game camera&apos;s position with the external camera to make more compelling screen recordings. Beat Saber is great for this, and I believe <a href="https://store.steampowered.com/app/465240/Serious_Sam_VR_The_Last_Hope/">Serious Sam VR: The Last Hope</a> supports this as well, to name a couple. </li><li>To develop the previous point: a screen recording&apos;s perspective is quite limited. Although AI+ML can do insanely cool things with video, it sure would be nice to have a customizable camera position for a scene replay. </li><li>To develop the point even further: a screen recording&apos;s audio is also finite. Yes, the game may have spatialized audio &#x2013; players at your side <em>sound</em> like they&apos;re at your side &#x2013; but if we want to move the camera around for a replay, we have to make the audio respond in turn.</li><li>Here&apos;s a new point: file size! A screen recording produces a video, which is subject to encoding choices (resolution, audio/video bitrate, framerate, etc.) for optimal file size. We can&apos;t get around paying the cost of recording audio, but certainly a performance&apos;s <em>lossless<sup><a href="#lossless">[1]</a></sup></em> movement data will be considerably smaller than a high-resolution screen recording. Remember: we just have to save a few positions and rotations per frame per player, rather than <em>w</em> &#x2715; <em>h</em> pixels per frame. </li><li>Another benefit based on file size: performance! Much kinder to the player&apos;s machine (or how about the server instead?) to task them with recording the movement data rather than record and encode a video. </li></ol><p>Points 4 and 5 actually lend themselves well to the idea of running a server for multiplayer. 
You&apos;ve gotta send movement and voice data between players anyway, so why not have the server just write that down as it happens? Ethics of data collection aside for a moment, this practice can be in the best interest of social VR server admins. You can review these performances to verify reports of abusive players and hold them accountable. I&apos;m trying and failing to find a rant about this issue from someone I follow (I wanna say <a href="http://doc-ok.org/">Doc Ok</a>?). But the issue is an old one that repeatedly appears across multiple platforms:</p><ul><li><a href="https://www.theverge.com/2018/10/24/18019376/oculus-go-samsung-gear-vr-new-record-report-abuse-feature">https://www.theverge.com/2018/10/24/18019376/oculus-go-samsung-gear-vr-new-record-report-abuse-feature</a></li><li><a href="https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem/">https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem/</a></li></ul><p>I&apos;ll acknowledge that moderation is a tricky thing. But in the shared spaces that VR can provide, it&apos;s probably in your best interest to assume that everything you say and do can be subject to playback at a later time. As a kid, the advice frequently given in school was along the lines of &quot;don&apos;t write anything you wouldn&apos;t want your mom to see on the front page of tomorrow&apos;s newspaper.&quot; So that&apos;s how old I am. Incidentally, VRChat does have a <a href="https://help.vrchat.com/hc/en-us/articles/360062658553-I-want-to-report-someone">reporting process</a> in place and they highly recommend including - you guessed it! - video recordings of the behavior being reported. 
</p><p>This post is getting longer than I expected, and I have more to say on this part in particular, so I&apos;ll wrap this up and shuffle the follow-up on game recordings to a new post.</p><h2 id="in-short">In short</h2><p>Overall, I&apos;m keeping my eye on VRChat as a choice of platform when it comes to &lt;secret VR collaborative performance project&gt;. The component restrictions most likely make it a non-option for my specific needs, but it&apos;s hard to argue with a successful platform and its community. Still, it&apos;s nice to see so much that <em>does </em>work in one system. Inspiration, at the very least.</p><!--kg-card-begin: html--><div id="lossless" style="font-size: 0.8em">
    <sup>[1]</sup> Lossless movement data insofar as the hardware&apos;s tracking frequency dictates. 
</div><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[The UI struggle continues, plus XR!]]></title><description><![CDATA[<p>I feel like I&apos;m hitting every shared experience working with the virtual mouse input component. This is the most recent one about the <a href="https://forum.unity.com/threads/changing-virtual-mouse-input-position-from-script.1276514/">cursor position jumping back to the last virtual position when going back and forth between the actual mouse and the virtual mouse</a>. Same deal but</p>]]></description><link>https://solaire.cs.csub.edu/aestus/uis-etc/</link><guid isPermaLink="false">6274aeebbfb6f90558f3ee92</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 06 May 2022 09:28:15 GMT</pubDate><content:encoded><![CDATA[<p>I feel like I&apos;m hitting every shared experience working with the virtual mouse input component. This is the most recent one about the <a href="https://forum.unity.com/threads/changing-virtual-mouse-input-position-from-script.1276514/">cursor position jumping back to the last virtual position when going back and forth between the actual mouse and the virtual mouse</a>. Same deal but <a href="https://forum.unity.com/threads/virtual-mouse-input.838786/">this time with clicks</a>. </p><p>By manually turning off the Virtual Mouse component when I&apos;m using the system mouse, the UI interactions work as intended. Happy about that, not so happy about the two of them not being able to stay synced. I attempted to use the scripts referenced above, but had no luck. I actually ended up disabling the GameObject with the Virtual Mouse Input component entirely. Instead, I&apos;m using the code I had before to warp the mouse position. </p><p>OK, WEIRD. I don&apos;t know what I changed or how it started working, but directional navigation in the UIs (at least for scene selection) is now working. 
The Cursor + Virtual Mouse stuff is all disabled, and it only starts working once I use the mouse to click on a scene. But once it has focus, it&apos;s just... working?! And the same thing does <em>not</em> work for the main controls to begin a scene/calibrate, etc. Possibly because the scene selection canvas has a grid layout group component, so it&apos;s got built-in support for that. </p><p>Things like this build up over time, and before you know it, you&apos;re &gt; 50% convinced that making your own engine would just be better, and somehow faster. </p><p>Whelp, I got frustrated surveying the landscape of folks with similar issues, so I stepped away from the KBM/gamepad inputs and worked on getting the Index to be recognized. Involved the following:</p><ul><li>Switching SteamVR to be the default system provider for OpenXR</li><li>Adding Valve Index controller schemes in Unity XR Plugin Management</li><li>Rebinding the input actions for locomotion, pointing, etc.</li></ul><p>I also wanted a way to delay the launch of VR mode, so that&apos;s being done with <a href="https://docs.unity3d.com/Packages/com.unity.xr.management@4.2/api/UnityEngine.XR.Management.XRManagerSettings.html">InitializeLoader</a> and StartSubsystems calls. In doing so, I found a very nice way to make a coroutine wait for another coroutine to complete: simply call:</p><pre><code class="language-C#">yield return StartCoroutine(OtherCoroutine());</code></pre><p>Overall this worked, but the player height became drastically high in doing so. This wasn&apos;t the case when the XR provider is configured to initialize at startup for your platform. You&apos;d think the fix would involve having manual checks for player height vs. the floor, etc., but my gut told me to just disable the XROrigin component by default and enable it after the XR subsystems are started. Voila! 
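</p><p>Pieced together, the delayed VR launch looks roughly like this (the field wiring is illustrative):</p><pre><code class="language-C#">using System.Collections;
using UnityEngine;
using UnityEngine.XR.Management;

// Starts XR on demand instead of at launch. The XROrigin stays
// disabled until the subsystems are running, which sidesteps the
// player-height spike.
public class DelayedXRStart : MonoBehaviour
{
    public MonoBehaviour xrOrigin; // the XROrigin component, disabled by default

    public IEnumerator StartXR()
    {
        yield return XRGeneralSettings.Instance.Manager.InitializeLoader();
        if (XRGeneralSettings.Instance.Manager.activeLoader == null)
        {
            Debug.LogError(&quot;XR loader failed to initialize.&quot;);
            yield break;
        }
        XRGeneralSettings.Instance.Manager.StartSubsystems();
        xrOrigin.enabled = true; // safe to enable now
    }
}</code></pre><p>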
As far as I can tell, doing so didn&apos;t cause any issues in desktop mode, which is nice.</p>]]></content:encoded></item><item><title><![CDATA[More on UIs, new devices.]]></title><description><![CDATA[<p>To continue from the <a href="https://solaire.cs.csub.edu/aestus/uis-for-multiple-platforms/">last post</a>, there&apos;s a <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@1.1/api/UnityEngine.InputSystem.UI.VirtualMouseInput.html">VirtualMouseInput</a> component that seems to be in the right direction. And just when I was polishing up the use of the hand-made version... This means more reconfiguring, but again, a little work now to save a lot in the future.</p>]]></description><link>https://solaire.cs.csub.edu/aestus/more-on-uis/</link><guid isPermaLink="false">627248d4bfb6f90558f3ee5d</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Wed, 04 May 2022 09:55:05 GMT</pubDate><content:encoded><![CDATA[<p>To continue from the <a href="https://solaire.cs.csub.edu/aestus/uis-for-multiple-platforms/">last post</a>, there&apos;s a <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@1.1/api/UnityEngine.InputSystem.UI.VirtualMouseInput.html">VirtualMouseInput</a> component that seems to be in the right direction. And just when I was polishing up the use of the hand-made version... This means more reconfiguring, but again, a little work now to save a lot in the future. *<em>knocks on wood*</em></p><p>Unity does give a warning about the PlayerInput not matching the actions that the UI Input Module supports, and even offers a button to &quot;fix&quot; it. I&apos;ve yet to press that button but I think it boils down to providing action event handlers for the mousey things: pointer move, left button, scroll, etc. It may not be that, though. I hope it&apos;s not, anyway. </p><p>Not much more to say than that for now. The semester&apos;s end is looming and I need to catch up on grading before spending too much more time on this. 
On the bright side, the Valve Index arrived today :) </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://solaire.cs.csub.edu/aestus/content/images/2022/05/image.png" class="kg-image" alt="Valve Index fresh out the box" loading="lazy" width="1483" height="887" srcset="https://solaire.cs.csub.edu/aestus/content/images/size/w600/2022/05/image.png 600w, https://solaire.cs.csub.edu/aestus/content/images/size/w1000/2022/05/image.png 1000w, https://solaire.cs.csub.edu/aestus/content/images/2022/05/image.png 1483w" sizes="(min-width: 720px) 720px"><figcaption>Hello worlds!</figcaption></figure><p>Update: the VirtualMouseInput component is doing pretty well. Not quite at perfect harmony between actual mouse and gamepad-driven virtual mouse, but it&apos;s getting there. It appears to be a bigger hassle to use both 1) a software cursor and 2) a canvas in world space. Evidently there will be a <a href="https://forum.unity.com/threads/how-to-use-virtual-mouse-for-world-canvas-interaction-when-cursor-is-in-locked-state.1260563/">fix for this in 1.4</a>, but the latest is 1.3. I&apos;m honestly a little relieved to find other folks with a similar issue. </p><p>I gave a passing try at running my scene with OpenXR configured as the XR plugin, but it didn&apos;t make it very far. Even with OpenXR as the configuration before, the Oculus provider seemed to just work. The best I got today was the right hand controller moving and aiming the camera?? So, try another night.</p>]]></content:encoded></item><item><title><![CDATA[UIs for multiple platforms]]></title><description><![CDATA[<p>I&apos;ve been working on a menu system in Unity to be used in Windows Standalone mode (with keyboard/mouse and gamepad support) and <a href="https://docs.unity3d.com/Packages/com.unity.xr.openxr@0.1/manual/index.html">OpenXR</a>. In this stage, the UI canvases are set to world-space configuration, which, from what I&apos;ve typically seen, is bad practice. 
Or at</p>]]></description><link>https://solaire.cs.csub.edu/aestus/uis-for-multiple-platforms/</link><guid isPermaLink="false">6270c8a6bfb6f90558f3ed17</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Tue, 03 May 2022 10:34:31 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;ve been working on a menu system in Unity to be used in Windows Standalone mode (with keyboard/mouse and gamepad support) and <a href="https://docs.unity3d.com/Packages/com.unity.xr.openxr@0.1/manual/index.html">OpenXR</a>. In this stage, the UI canvases are set to world-space configuration, which, from what I&apos;ve typically seen, is bad practice. Or at least it can be if the experience is made to be poor by:</p><ul><li>requiring the player to move/look to view the entire menu,</li><li>managing looking vs. selecting state,</li><li>forcing raycast/&quot;laser pointer&quot;-only selections via mouse/hand pointer,</li><li>moving the mouse cursor with a gamepad (shudder)</li></ul><p>And yet, I&apos;m doing all of these &#x2013; for now. Part of the reason is due to using Unity&apos;s &quot;new&quot; <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@1.3/manual/QuickStartGuide.html">Input System</a> and their <a href="https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@2.0/manual/index.html">XR Interaction Toolkit</a>. The purpose of such choices is to save time and effort in the long run. For instance, the same &quot;move&quot; action and player logic can be bound to multiple inputs simultaneously, such as: WSAD on a keyboard, the left analog stick on a gamepad, and a left-hand XR controller&apos;s analog stick. Some of the work is deferred to configuration files outside of script, but the gist isn&apos;t so bad to work with:</p><ol><li>Give an action a name</li><li>Bind it to input(s) and configure</li><li>Define an &quot;On(ActionName)&quot; method in your player class</li></ol><p>Still, tedium creeps in. 
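</p><p>For reference, here&apos;s what step 3 looks like with the PlayerInput component&apos;s &quot;Send Messages&quot; behavior &#x2013; class and action names are illustrative:</p><pre><code class="language-C#">using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerActions : MonoBehaviour
{
    Vector2 move;

    // Invoked by PlayerInput for an action named &quot;Move&quot;, whether the
    // input was a keyboard, a gamepad stick, or an XR controller stick.
    void OnMove(InputValue value)
    {
        move = value.Get&lt;Vector2&gt;();
    }

    void Update()
    {
        transform.Translate(new Vector3(move.x, 0f, move.y) * Time.deltaTime);
    }
}</code></pre><p>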
You know how games will often seamlessly switch between input prompts? &quot;Press enter to start&quot; becomes &quot;Press &#x24B6; to start&quot; as soon as you touch a controller&apos;s analog stick. Suddenly, the benefit of handling input actions like an interface becomes a restriction &#x2013; you aren&apos;t able to see which device the input came from in the handler, so you aren&apos;t sure if and when to switch displays. Additionally, allowing users to rebind becomes quite involved, too. The InputSystem allegedly has some things <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@1.0/api/UnityEngine.InputSystem.InputActionRebindingExtensions.RebindingOperation.html">built-in to help</a>, but nothing is free. In turn, there&apos;s a plugin for the InputSystem called <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@0.2/api/UnityEngine.InputSystem.Plugins.PlayerInput.PlayerInput.html">PlayerInput</a> that&apos;s supposed to help with both of these issues, and so the perpetual struggle to work with Unity input and not against it continues. </p><p>With all that in mind, I&apos;m happy I can test out KBM and gamepad interactions in the same scene. Longer-term goals for the UI options include having directional navigation in the menus for all devices - arrows on keyboard, sticks/dpads on controllers - to avoid the need for raycasting. That could be a single raycast to select the menu using any collision on its canvas, then transition to an interaction state so the directional controls navigate between menu options. And which is more evil?</p><!--kg-card-begin: html--><div id="poll_for_form_1"></div>
<script>
    // Request poll id 1 from the server.
    let args2 = new FormData();
    args2.append("GetPoll", true);
    args2.append("id", 1);
    fetch("https://solaire.cs.csub.edu/misc/poll_display.php", {
        method: 'POST',
        body: args2,
    })
    .then(response => {
        console.log(response);
        return response.json();
    })
    .then(data => {
        // The endpoint returns the poll form's HTML plus the script that
        // handles its submission: insert the script node, then the form.
        let news = document.createElement("script");
        news.text = data.script;
        let formS = document.getElementById("poll_for_form_1");
        formS.parentNode.insertBefore(news, formS);
        formS.innerHTML = data.html;
        console.log(data);
    })
    .catch((error) => {
        console.error(error);
    });
</script><!--kg-card-end: html--><p>Let&apos;s see how well that test poll does... </p><p>Side note: I took a <em>major</em> detour in writing this post that went something like this:</p><ol><li>Can Ghost embed polls? Well, it can embed <a href="https://ghost.org/integrations/?tag=surveys+%26+forms">Google Forms...</a></li><li>Can Ghost embed JS and HTML? Yes! Now we&apos;re talking...</li><li>Can I whip together a poll database? Yes! (You see where this is going)</li><li>Can I write some PHP to interface with the database and invoke with with cURL?</li><li>How about with fetch from JS?</li><li>Can I generate the form in HTML and the submit event handler in JS from the PHP?</li><li>Can I add the returned JS to the page so it can be executed on the form submission?</li><li>Can I show results for a poll after it&apos;s been submitted?</li><li>Can I allow users to change their vote?</li></ol><p>Conclusion: cascading &quot;Yes!&quot;es are dangerous. </p>]]></content:encoded></item><item><title><![CDATA[Demos]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>LBS (left) vs spring-based skinning (right)<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2.mp4">download link</a> instead.<br>
</video></p>
<p>With normals visible<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2_normal.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;</video></p>]]></description><link>https://solaire.cs.csub.edu/aestus/demos/</link><guid isPermaLink="false">6205b93728f3207e6a527070</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 05 Jul 2019 10:48:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>LBS (left) vs spring-based skinning (right)<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2.mp4">download link</a> instead.<br>
</video></p>
<p>With normals visible<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2_normal.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lbs_rom2_normal.mp4">download link</a> instead.<br>
</video></p>
<h1 id="armadillo">Armadillo</h1>
<h2 id="springforceswithlbs">Spring forces with LBS</h2>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_forces.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_forces.mp4">download link</a> instead.
</video>
<h2 id="springforceswithrigidskinning">Spring forces with rigid skinning</h2>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_rigid_forces.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_rigid_forces.mp4">download link</a> instead.
</video>
<h2 id="splitswireframe">Splits wireframe</h2>
<p>Left: all forces but length. Loss of detail around abdominal muscles<br>
Right: all forces. Better preservation, minor loss<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_splits_sidebyside.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_splits_sidebyside.mp4">download link</a> instead.<br>
</video></p>
<p>Same, but with normal map<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_splits_sidebyside_normal.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_lbs_splits_sidebyside_normal.mp4">download link</a> instead.<br>
</video></p>
<h2 id="lbsrom">LBS ROM</h2>
<p>A little difficult to compare due to video size. Will crop these so more of the mesh is visible on each side without so much filler space in between.<br>
Left: without forces<br>
Right: with forces. A couple of frame-to-frame artifacts (slight bulging on the arms), but also better volume preservation in the bends.<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_rom_sidebyside.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/armadillo_rom_sidebyside.mp4">download link</a> instead.<br>
</video></p>
<h2 id="forcecombinations">Force combinations</h2>
<p>Length force only. In this LBS pose, the mesh has 14% less volume than when it&apos;s in bind. After the length forces are applied, the mesh has 3.74% less volume than bind.<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lengthforce_restore.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/lengthforce_restore.mp4">download link</a> instead.<br>
</video></p>
<p>Length and surface forces. Almost identical to using length-only forces<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/length_surface_force_restore.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/length_surface_force_restore.mp4">download link</a> instead.<br>
</video></p>
<h3 id="lengthsurfaceattachmenttorqueandbonetorque">Length, surface, attachment torque, and bone torque</h3>
<p>Using position-based updater - no acceleration, only velocity.<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/all_force_restore.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/all_force_restore.mp4">download link</a> instead.<br>
</video></p>
<p>Using physics-based updater - acceleration and velocity<br>
<video width="800" height="450" controls><br>
<source src="https://solaire.cs.csub.edu/aestus/content/images/2019/07/all_force_restore_wilhelm.mp4"><br>
Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2019/07/all_force_restore_wilhelm.mp4">download link</a> instead.<br>
</video></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Bone torque and flex weights]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Today I added some controls to my spring force solver. My goal is to have the spring solver &quot;always-on&quot; (updating the mesh once per frame based on the spring forces), but whenever I&apos;d try letting the solver run continuously, the spring forces would eventually skew too</p>]]></description><link>https://solaire.cs.csub.edu/aestus/bone-torque-and-flex-weights/</link><guid isPermaLink="false">6205b93728f3207e6a52706f</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Wed, 15 May 2019 23:59:05 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Today I added some controls to my spring force solver. My goal is to have the spring solver &quot;always-on&quot; (updating the mesh once per frame based on the spring forces), but whenever I&apos;d try letting the solver run continuously, the spring forces would eventually skew too much to be considered usable or stable. So a mesh skinned with LBS may look like this:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs.png" alt="flatbox-full-lbs" loading="lazy"></p>
<p>And using the results from LBS as the initial state, this is how it looks after a few rounds of the solver:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-no-converge.png" alt="flatbox-full-lbs-springs-no-converge" loading="lazy"></p>
<p>The middle regions look OKish, but vertices near the root and the leaf have distorted considerably. Eventually the spring forces stop altering the mesh and it looks like this:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-no-bone-torque.png" alt="flatbox-full-lbs-springs-no-bone-torque" loading="lazy"></p>
<p>So it definitely needs some help. To help stabilize the solver, I considered 2 choices:</p>
<ol>
<li>Modify the solver so vertices along the bone can have varying flexibility. For example, vertices near the center of the bone should essentially retain their shape from LBS, while vertices near joints should be allowed to flex as much as possible.</li>
<li>Try to add an additional force that counteracts the tendency for the mesh to skew.</li>
</ol>
<h1 id="flexweights">Flex weights</h1>
<p>I went with the first approach and implemented a few simple functions as flex weights. Each bone gets essentially a 1D, 1-channel float texture that the solver can use to accordingly weigh the computed force vectors. If the flex weight for certain vectors is 0, then this effectively adds static constraints to the solver and should prevent the skewing collapse. Here&apos;s what some of the flex functions look like:</p>
<p>Flex shape: U<br>
Function: <code>[](float t) { return glm::pow(glm::abs&lt;float&gt;(t - 0.5f) * 2.0f, 2.0f); }</code><br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-no-bone-torque-all-flex-u.png" alt="flatbox-full-lbs-springs-no-bone-torque-all-flex-u" loading="lazy"></p>
<p>This represents a smooth change of flexibility across the bone, allowing the ends to vary but keeping the center rigid. You can see the effect this has on the mesh - since LBS introduces scale vector length changes, this makes the inner bends appear pinched as vertices in between the rigid regions deform to reduce the spring system&apos;s energy.</p>
<p>Flex shape: /<br>
Function: <code>[](float t) { return t; }</code><br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-no-bone-torque-all-flex-linear.png" alt="flatbox-full-lbs-springs-no-bone-torque-all-flex-linear" loading="lazy"></p>
<p>The linear flex function makes vertices rigid at the start of a bone and increasingly flexible toward its end. This looks a little more acceptable than the U function, but it makes for some strange deformations on the leaf joint&apos;s vertices.</p>
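<p>For reference, here are those flex functions as plain C++ (with std:: standing in for the solver's glm:: calls; the function names are mine, not the solver's):</p>

```cpp
#include <cmath>

// Flex weights in [0, 1]: 0 keeps the LBS result rigid, 1 lets the spring
// forces act at full strength. t is the vertex's normalized position along
// the bone (0 at the parent joint, 1 at the child).

// "U": rigid at the bone's center, flexible at both ends.
float flexU(float t) { return std::pow(std::fabs(t - 0.5f) * 2.0f, 2.0f); }

// "/": rigid at the start, increasingly flexible toward the end.
float flexLinear(float t) { return t; }

// "\": the mirror image of "/".
float flexLinearNeg(float t) { return 1.0f - t; }
```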
<h1 id="bonetorque">Bone torque</h1>
<p>With this approach, I&apos;m setting the variable flexibility concept aside for a moment and focusing on what causes the skew during long runs. The spring force solver works by computing spring forces between adjacent vertices. One of those spring forces is a torsional force between adjacent scale vectors. So if the solver is left to run on its own, it will continue trying to minimize the angles between adjacent scale vectors - there is no incentive to stop running if the angles can still be minimized. This is how the skewing result happens.</p>
<p>I was counting on convergence checks to prevent this from happening, but that&apos;s less reliable for an always-on solver. Instead, I added a new torsion force. This one measures the angle between a scale vector and its connecting bone at bind, then when the solver is running, it computes a force to restore this angle.</p>
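<p>Sketched out in plain C++ (illustrative names and types - not the actual compute shader code), measuring and restoring that angle looks something like:</p>

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float len(Vec3 a) { return std::sqrt(dot(a, a)); }

// Angle between a scale vector and its bone direction (both assumed non-zero).
float angleBetween(Vec3 scaleVec, Vec3 boneDir) {
    float c = dot(scaleVec, boneDir) / (len(scaleVec) * len(boneDir));
    if (c > 1.0f) c = 1.0f;    // clamp against floating-point drift
    if (c < -1.0f) c = -1.0f;
    return std::acos(c);
}

// Bone torque magnitude: proportional to how far the current angle has
// drifted from the angle recorded at bind. At the bind angle, the force
// is zero, so the solver has no incentive to keep skewing.
float boneTorqueMagnitude(float bindAngle, float currentAngle, float stiffness) {
    return stiffness * (bindAngle - currentAngle);
}
```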
<p>Here&apos;s a look at the mesh with bone torque and stiffness coefficient=1<br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-with-bone-torque-1.png" alt="flatbox-full-lbs-springs-with-bone-torque-1" loading="lazy"></p>
<p>Not bad, maybe a little too rigid. The inner bends still look like they&apos;re overlapping, and there are some noticeable gaps on the outer bends. Here&apos;s the result using coefficient=0.1<br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-with-bone-torque-0.1.png" alt="flatbox-full-lbs-springs-with-bone-torque-0.1" loading="lazy"></p>
<p>The outer bends look a little more pointy, but the inner bends have converged to nice pinches. The leaf joint&apos;s vertices are staying put, too!</p>
<h1 id="combined">Combined</h1>
<p>Here are some figures with both flex weights and bone torque.</p>
<p>Bone coeff=0.1, flex U:<br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-with-bone-torque-0.1-flex-u-1.png" alt="flatbox-full-lbs-springs-with-bone-torque-0.1-flex-u-1" loading="lazy"><br>
The pinching artifact returns, but looks a little more consistent and stylized.</p>
<p>Bone coeff=0.1, flex /:<br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-with-bone-torque-0.1-flex-linear.png" alt="flatbox-full-lbs-springs-with-bone-torque-0.1-flex-linear" loading="lazy"></p>
<p>Bone coeff=0.1, flex \:<br>
<img src="https://solaire.cs.csub.edu/aestus/content/images/2019/05/flatbox-full-lbs-springs-with-bone-torque-0.1-flex-linear-negative.png" alt="flatbox-full-lbs-springs-with-bone-torque-0.1-flex-linear-negative" loading="lazy"></p>
<p>Subtle differences in the squish direction between / and \...</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Spring forces and ARAP]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is a post comparing two techniques for mesh deformation: spring forces, and ARAP. I&apos;ve written about as-rigid-as-possible deformation <a href="https://toothman.cs.ucdavis.edu/aestus/as-rigid-as-possible-arap-skinning/">before</a>, but I haven&apos;t written much about springs before. The spring force solver runs on an OpenGL compute shader, while the ARAP solver runs on the CPU</p>]]></description><link>https://solaire.cs.csub.edu/aestus/spring-force-balancers/</link><guid isPermaLink="false">6205b93728f3207e6a52706c</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Wed, 18 Apr 2018 21:11:39 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is a post comparing two techniques for mesh deformation: spring forces, and ARAP. I&apos;ve written about as-rigid-as-possible deformation <a href="https://toothman.cs.ucdavis.edu/aestus/as-rigid-as-possible-arap-skinning/">before</a>, but I haven&apos;t written much about springs before. The spring force solver runs on an OpenGL compute shader, while the ARAP solver runs on the CPU using Eigen/LibIGL.</p>
<p>In this post on <a href="https://toothman.cs.ucdavis.edu/aestus/attachment-and-rigidity/">attachment and rigidity</a>, I brought up some artifacts that can occur when deforming a mesh using a skeleton. Here&apos;s an example:</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/stretch.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/stretch.mp4">download link</a> instead.
</video>
<p>For the record: The mesh I&apos;m using has 5634 vertices, 11264 faces, and 17 bones. The plus-sign shape makes it very useful for deformation studies.</p>
<p>Similar artifacts happen with LBS and DQS:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/04/dqs_skel_artifact.PNG" alt="dqs_skel_artifact" loading="lazy"></p>
<p>The skinning method&apos;s rigidity is causing that drastic stretch artifact. In the last post, I also mentioned a few things I might try to remove the artifact. Let&apos;s start with ARAP. I&apos;m using <a href="http://igl.ethz.ch/projects/ARAP/index.php">LibIGL&apos;s implementation</a> with default parameters: Young&apos;s modulus is set to 1, and the maximum iteration count is 100.</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/arap.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/arap.mp4">download link</a> instead.
</video>
<p>ARAP has a preprocess phase for the selected boundary vertices (highlighted yellow). Any time the selected set changes, the preprocess must be rerun. The solver runs separately. For the deformation above, compute time was ~400 ms, and solver time was 1100 ms.</p>
<p>Now let&apos;s look at the same kind of deformation, but used with spring forces. The techniques are rather different, but I&apos;ll use the same cap of 100 iterations. Note: ARAP runs and solves everything in one pass. The spring forces can run like this too, or I can set it to run one iteration per frame to show the change over time. I&apos;ll show both below.</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/springforces.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/springforces.mp4">download link</a> instead.
</video>
<p>There&apos;s no preprocess phase for the spring force solver. If I&apos;m allowing the solver to run until complete, then the total compute time is <strong>under 5 ms</strong>. When I restrict the solver to running once per frame, the total compute time is closer to 12 ms. The additional time comes from frame-to-frame overhead - mostly updating and binding shader storage buffers. Even so, it&apos;s 12 ms compared to ARAP&apos;s total of 1500 ms: around 125x speedup.</p>
<p>So, the performance improvement is nice, and the runtime makes it almost suitable for real-time use. However, the techniques are quite different. ARAP is an energy minimization problem solved with a sparse linear system. It depends on the mesh&apos;s adjacency matrix and edge lengths at bind.</p>
<p>I&apos;ve started by creating springs out of every mesh edge. Let&apos;s bust out <a href="https://en.wikipedia.org/wiki/Hooke%27s_law">Hooke&apos;s Law</a>:</p>
<p>$$<br>
\begin{equation}<br>
F = -kx<br>
\end{equation}<br>
\label{eq:hookeslaw}<br>
$$</p>
<p>where $F$ is the spring&apos;s force, $k$ is the spring constant that represents the spring&apos;s stiffness, and $x$ is the spring&apos;s displacement from rest position. The displacement&apos;s sign determines the force direction, and at 0 displacement, the spring exerts no force.</p>
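<p>As a quick sketch (plain C++ with illustrative types and names - not the actual compute shader code), here's Hooke's law applied to a single mesh edge:</p>

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Spring force exerted on vertex i by the edge (i, j). The displacement x
// is the current edge length minus the rest (bind) length; the force points
// along the edge, pulling i toward j when stretched (x > 0) and pushing it
// away when compressed (x < 0).
Vec3 edgeSpringForce(Vec3 vi, Vec3 vj, float restLength, float k) {
    Vec3 d = sub(vj, vi);
    float L = len(d);
    if (L == 0.0f) return {0.0f, 0.0f, 0.0f};  // degenerate edge: no direction
    float x = L - restLength;                  // displacement from rest
    Vec3 dir = mul(d, 1.0f / L);               // unit vector from i toward j
    return mul(dir, k * x);                    // magnitude k*x along the edge
}
```

<p>Summing this over each vertex's incident edges would give the net force for one solver update.</p>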
<p>We can also compute the potential energy stored in a spring:</p>
<p>$$<br>
\begin{equation}<br>
U = \frac{1}{2}kx^2<br>
\end{equation}<br>
\label{eq:springpe}<br>
$$</p>
<p>How does this relate to our mesh? If the mesh is a set of vertices and edges $M=(V,E)$, then we can compute the energy of any edge given a deformation $M&apos;=(V&apos;,E)$:</p>
<p>$$<br>
\begin{equation}<br>
U(i,j) = \frac{1}{2}k (||V&apos;(i)-V&apos;(j)||-||V(i)-V(j)||)^2<br>
\end{equation}<br>
\label{eq:edgepe}<br>
$$</p>
<p>And the total potential energy in the mesh deformation:</p>
<p>$$<br>
\begin{equation}<br>
E(M&apos;) = \sum_{i,j \in E} U(i,j)<br>
\end{equation}<br>
\label{eq:meshenergy}<br>
$$</p>
<p>Now things can get interesting. The edge&apos;s energy is a quadratic function of the displacement. The mesh&apos;s energy is a sum over the edge energies. A good way to reduce the mesh&apos;s energy is to redistribute large displacements across many adjacent edges. Because of that square power, it ends up being better to have many edges with a little displacement than to have only a few edges with large displacements.</p>
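<p>To make that concrete, here's a tiny sketch of the edge and mesh energies from the equations above (illustrative names, lengths precomputed per edge):</p>

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Potential energy of one edge spring: U = (1/2) k x^2, where x is the
// deformed edge length minus the bind edge length.
float edgeEnergy(float bindLength, float deformedLength, float k) {
    float x = deformedLength - bindLength;
    return 0.5f * k * x * x;
}

// Total energy of a deformation: the sum over all edges.
float meshEnergy(const std::vector<float>& bindLengths,
                 const std::vector<float>& deformedLengths, float k) {
    float total = 0.0f;
    for (std::size_t i = 0; i < bindLengths.size(); ++i)
        total += edgeEnergy(bindLengths[i], deformedLengths[i], k);
    return total;
}
```

<p>With k=1, one edge stretched by 1.0 stores 0.5 units of energy, while ten edges stretched by 0.1 each store only 10 &#xB7; 0.5 &#xB7; 0.01 = 0.05 in total &#x2013; hence the preference for spreading displacement across many edges.</p>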
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Attachment and rigidity]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Glossary:</p>
<ul>
<li>ABCD: Attachment-based character deformation. Performs rigid skinning first, then deforms mesh vertices relative to a point of attachment on the skeleton.</li>
<li>ARAP: As-rigid-as-possible surface modeling technique. Uses ROIs to define mesh deformation area. Switches between optimizing vertex positions and rotations to minimize cell energy until convergence.</li>
<li>LSE: Laplacian surface</li></ul>]]></description><link>https://solaire.cs.csub.edu/aestus/attachment-and-rigidity/</link><guid isPermaLink="false">6205b93728f3207e6a52706a</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 23 Feb 2018 01:56:13 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Glossary:</p>
<ul>
<li>ABCD: Attachment-based character deformation. Performs rigid skinning first, then deforms mesh vertices relative to a point of attachment on the skeleton.</li>
<li>ARAP: As-rigid-as-possible surface modeling technique. Uses ROIs to define mesh deformation area. Switches between optimizing vertex positions and rotations to minimize cell energy until convergence.</li>
<li>LSE: Laplacian surface editing. Uses ROIs to define mesh deformation area. Attempts to minimize changes to differential geometry.</li>
<li>ROI: Region of interest on a mesh. Geometry is separated into solved (fixed in place) and unsolved (allowed to vary) groups. Solved groups include region boundaries to control the deformation size, and handles that can be deformed by the user to drive deformation. Unsolved groups include whatever geometry is contained within the boundaries.</li>
</ul>
<p>This post concerns the abilities and limitations of ABCD. Namely, the attachment system produces the most visually appealing results when mesh skeletons are chains: single parent, single child.</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/cylinderScaleDirections.png" alt="cylinderScaleDirections" loading="lazy"></p>
<p>That&apos;s basically the poster child mesh of ABCD. Uniform thickness, a chain skeleton, and reasonable bone lengths. If the mesh and skeleton are bent, the default attachment scheme (direct projection onto nearest bone) fails considerably:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/bentcylinderattachment.png" alt="bentcylinderattachment" loading="lazy"></p>
<p>Why exactly is that a failure? Visually, it seems clear: the inner bend attaches to the skeleton strangely, leaving these large gaps in between adjacent attachment points. Intuitively, those attachment points should probably be close together. Why?</p>
<p>To deform a mesh vertex around a bending joint, ABCD has to find answers to a few questions:</p>
<ul>
<li>How close is the vertex to the bone? Is it within the bending joint&apos;s ROI?</li>
<li>Is the vertex on the inner side of the bend, or the outer side?</li>
<li>Is the bending joint on the parent side of the bone, or the child?</li>
<li>How much is that joint bending, and along what axis?</li>
</ul>
<p>Based on those answers, the vertex receives a percentage of the joint&apos;s bend (including 0% if there&apos;s no influence). For this to smoothly deform the mesh, it requires some preservation of distances between attachment points, relative to the mesh. That is, if two mesh vertices are adjacent, then their attachment points should be, too. Or at least, they should have roughly the same distance between them. Watch what happens when ABCD skins the bent cylinder, both before and after smoothing the attachment points:</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/outerbend.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/outerbend.mp4">download link</a> instead.
</video>
<p>Pretty gross! Granted that the mesh doesn&apos;t seem well-suited for large outer bends, I was hopeful that ABCD would do a decent job uniformly straightening out the surface. For comparison, let&apos;s look at an inner bend:</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/innerbend.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/innerbend.mp4">download link</a> instead.
</video>
<p>Not a big surprise, but the mesh is better suited for inner bends. Still, it&apos;d be nice to have more consistent behavior around the outer bend.</p>
<p>ABCD accommodates changes in bone length fairly well, but it&apos;s still problematic. Because of the linear mapping between attachment points and bone length, large bone length changes introduce their own discontinuities. The vertices along the bone are consistently-spaced with each other, and so are the vertices outside the bone&apos;s influence. But the boundary between them is too strong, and results in poor spacing:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/bone_squash.png" alt="bone_squash" loading="lazy"></p>
<p>The same artifact could occur with large bone twists. Ultimately, the issue is this: ABCD behaves as though every vertex is mapped to exactly one bone in the skeleton. After rigid skinning with this data, it does what it can to smooth out the regions in between rigidly-skinned meshes. The issue gets even worse with more branched skeletons (joints with multiple children). Let&apos;s start with the LBS version of a pose:</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/lbsbend.png" alt="lbsbend" loading="lazy"></p>
<p>Not pretty, but compared to this (ABCD, direct projection):</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/abcdbend.png" alt="abcdbend" loading="lazy"></p>
<p>There are two bad artifacts: one in the upper left, and one in the lower left. They are both the result of the single bone vertex mapping. One way to try and &quot;fix&quot; them would be to edit the skeleton: uniformly extend the bone lengths around the root so the joints extend past the saddle-shaped regions, making the central region more rigid in the process. Better results, but only because the mesh is more rigid. If we want that region to deform smoothly, then we need another approach. What happens if we just try to smooth the attachment points a bit?</p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/plusbend.gif" alt="plusbend" loading="lazy"></p>
<p>That&apos;s actually quite a bit better! Attachment <em>quality</em> definitely makes a pretty big difference in the skinning results. How does it handle a few more bends? Here&apos;s a comparison between ABCD and LBS. The biggest issue seems to be the skin weight quality I got from Maya (geodesic voxel @ 512 resolution, classic linear weights. I thought that would be more than sufficient!).</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/plusbend.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/plusbend.mp4">download link</a> instead.
</video>
<p>And here&apos;s another clip with alternating up-down bends on the &quot;limbs&quot;:</p>
<video width="800" height="450" controls>
    <source src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/updownbend.mp4">
    Ah shoot, if you&apos;re reading this, your browser doesn&apos;t support video tags. Here&apos;s a <a href="https://solaire.cs.csub.edu/aestus/content/images/2018/02/updownbend.mp4">download link</a> instead.
</video>
<p>It&apos;s promising, but we still need to address the rigid boundaries between bone regions. The surface editing approach to shape control is to freely deform a small set of handle geometry (vertices, triangles), then optimize a linear least squares problem to minimize energy changes for each vertex. These energy values are defined using the vertex&apos;s neighbors, so mesh adjacency and spacing are critical factors. What if we could do the same for skinning?</p>
<ul>
<li>Some mesh vertices are more &quot;cleanly&quot; mapped to the skeleton than others. Think about the middle of an arm bone compared to the center of the chest.</li>
<li>Since we have a strong understanding of how those vertices deform, we should have some sense of how rigidly attached to the skeleton they are. Let&apos;s call this <strong>vertex rigidity</strong>.</li>
<li>Based on that rigidity and an initial rigid skinning phase, we should be able to inform a solver (similar to that used for surface editing) on how much &quot;wiggle room&quot; each vertex is granted.</li>
<li>With the right optimization, this should result in smoother deformations along bending joints (low rigidity) while preserving the desired shape for singular chains (high rigidity).</li>
</ul>
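<p>To make the idea concrete, here&apos;s a toy sketch of how vertex rigidity might be estimated from a vertex&apos;s distance to its attached bone segment. The exponential falloff and the <code>falloff</code> parameter are my inventions for illustration, not a settled formulation:</p>

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all 3-tuples)."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in ab)
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = tuple(ai + t * c for ai, c in zip(a, ab))
    return math.dist(p, closest)

def vertex_rigidity(p, bone_start, bone_end, falloff=1.0):
    """Hypothetical heuristic: 1.0 for a vertex right on the bone,
    decaying toward 0.0 with distance from it."""
    d = point_segment_distance(p, bone_start, bone_end)
    return math.exp(-d / falloff)
```

A vertex in the middle of an arm would land near 1.0, while a chest vertex far from any single bone would land much lower, which matches the intuition in the list above.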
<p>In application, this will probably be a two-stage pass: stage 1 is ABCD skinning on the GPU, stage 2 is optimization with a solver. Hopefully, since most vertices will already be near their ideal positions, the solve time will be short. This setup is structurally similar to the elastic implicit skinning work, which does rigid isosurface deformation, then vertex marching to restore offsets, then interleaved tangential relaxation/ARAP passes to achieve the global deformation.</p>
<p>Questions of which solver to use are harder to answer immediately. The first guesses would be:</p>
<ul>
<li>ARAP, but using scale vectors (vertex position minus attachment point) to determine rigidity rather than position alone.</li>
<li>LSE (Laplacian surface editing), but this would probably look bad under large rotations, since even optimized LSE variants have trouble with large transformations.</li>
<li>Spring force solver. Possibly a more natural approach to a problem involving rigidity, and we can likely take advantage of the attachment and scale vectors as constraints to preserve thickness, etc. May have stability issues, but could also be very easy to implement and test. This may be where I go first with experiments, but first, more research!</li>
</ul>
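<p>As a feel for the relaxation idea, here&apos;s a toy solver on scalar positions: each vertex is pulled toward its rigid-skinned target in proportion to its rigidity, and toward its neighbors&apos; average with the remaining weight (its &quot;wiggle room&quot;). The blending rule, <code>step</code>, and iteration count are all assumptions for illustration, not the eventual formulation:</p>

```python
def relax(positions, targets, neighbors, rigidity, iterations=50, step=0.5):
    """Toy relaxation: rigid vertices stick to their skinned targets,
    loose vertices drift toward the average of their neighbors."""
    pos = list(positions)
    for _ in range(iterations):
        new = []
        for i, p in enumerate(pos):
            nbr = neighbors[i]
            avg = sum(pos[j] for j in nbr) / len(nbr) if nbr else p
            # Blend between the rigid target and the neighbor average.
            goal = rigidity[i] * targets[i] + (1.0 - rigidity[i]) * avg
            new.append(p + step * (goal - p))
        pos = new
    return pos
```

On a three-vertex chain with rigid endpoints and a fully loose middle vertex, the endpoints snap to their targets and the middle settles to their midpoint, which is the smoothing behavior we want across a bending joint.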
<p>More to come later...</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[libigl]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/libigl.png" alt="libigl" loading="lazy"></p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/igl.png" alt="Swiss Federal Institute of Technology in Zurich, Switzerland" loading="lazy"></p>
<p><a href="https://igl.ethz.ch/">Institute of Visual Computing of ETH Z&#xFC;rich</a><br>
Prof. Olga Sorkine-Hornung</p>
<h2 id="researchtopics">Research topics</h2>
<ul>
<li>interactive shape modeling and animation,</li>
<li>digital geometry processing,</li>
<li>digital fabrication,</li>
<li>image and video processing</li>
</ul>
<h2 id="publications">Publications</h2>
<p><a href="http://igl.ethz.ch/projects/Laplacian-mesh-processing/Laplacian-mesh-editing/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/lse.png" alt="lse" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/Laplacian-mesh-processing/sketch-mesh-editing/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/silsketch.png" alt="silsketch" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/ARAP/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/arap.png" alt="arap" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/skinning/stretchable-twistable-bones/"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/stb.png" alt="stb" loading="lazy"></a></p>
<p><a href="https://igl.ethz.ch/publications/">And many more...</a></p>
<h2 id="libiglacgeometryprocessinglibrary">libigl - A C++ Geometry Processing Library</h2>
<blockquote>
<p>A simple C++ geometry processing library with wide functionality, including construction</p></blockquote>]]></description><link>https://solaire.cs.csub.edu/aestus/libigl/</link><guid isPermaLink="false">6205b93728f3207e6a52706b</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Wed, 21 Feb 2018 10:29:41 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/libigl.png" alt="libigl" loading="lazy"></p>
<p><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/igl.png" alt="Swiss Federal Institute of Technology in Zurich, Switzerland" loading="lazy"></p>
<p><a href="https://igl.ethz.ch/">Institute of Visual Computing of ETH Z&#xFC;rich</a><br>
Prof. Olga Sorkine-Hornung</p>
<h2 id="researchtopics">Research topics</h2>
<ul>
<li>interactive shape modeling and animation,</li>
<li>digital geometry processing,</li>
<li>digital fabrication,</li>
<li>image and video processing</li>
</ul>
<h2 id="publications">Publications</h2>
<p><a href="http://igl.ethz.ch/projects/Laplacian-mesh-processing/Laplacian-mesh-editing/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/lse.png" alt="lse" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/Laplacian-mesh-processing/sketch-mesh-editing/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/silsketch.png" alt="silsketch" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/ARAP/index.php"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/arap.png" alt="arap" loading="lazy"></a></p>
<p><a href="http://igl.ethz.ch/projects/skinning/stretchable-twistable-bones/"><img src="https://solaire.cs.csub.edu/aestus/content/images/2018/02/stb.png" alt="stb" loading="lazy"></a></p>
<p><a href="https://igl.ethz.ch/publications/">And many more...</a></p>
<h2 id="libiglacgeometryprocessinglibrary">libigl - A C++ Geometry Processing Library</h2>
<blockquote>
<p>A simple C++ geometry processing library with wide functionality, including construction of sparse discrete differential geometry operators and finite-elements matrices such as the cotangent Laplacian and diagonalized mass matrix, simple facet and edge-based topology data structures, mesh-viewing utilities for OpenGL and GLSL, and many core functions for matrix manipulation which make Eigen feel a lot more like MATLAB</p>
</blockquote>
<p><a href="https://libigl.github.io/libigl/">GitHub</a> / <a href="http://libigl.github.io/libigl/tutorial/tutorial.html">Tutorial</a></p>
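<p>For a feel of what that quote describes, here&apos;s the cotangent Laplacian assembled by hand for a tiny mesh, in plain Python rather than libigl/Eigen (libigl&apos;s <code>igl::cotmatrix</code> builds the same operator, sparsely, in one call). Dense lists stand in for the sparse matrix purely for readability:</p>

```python
import math

def cot_laplacian(V, F):
    """Dense cotangent Laplacian: L[i][j] = 0.5 * sum of cotangents of the
    angles opposite edge (i, j); diagonal entries make each row sum to zero."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # Cotangent of the angle at vertex o, opposite edge (i, j).
            u = [V[i][d] - V[o][d] for d in range(3)]
            v = [V[j][d] - V[o][d] for d in range(3)]
            dot = sum(a * b for a, b in zip(u, v))
            cross = [u[1] * v[2] - u[2] * v[1],
                     u[2] * v[0] - u[0] * v[2],
                     u[0] * v[1] - u[1] * v[0]]
            w = 0.5 * dot / math.sqrt(sum(c * c for c in cross))
            L[i][j] += w; L[j][i] += w
            L[i][i] -= w; L[j][j] -= w
    return L
```

On a single right triangle, the edge opposite the 90&#xB0; corner gets weight 0 (cot 90&#xB0; = 0) and the other two edges get 0.5 each, with rows summing to zero.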
<p>What I like about it:</p>
<ul>
<li>header-only library, but supports external high-performance libraries</li>
<li>Uses <a href="http://eigen.tuxfamily.org/index.php?title=Main_Page">Eigen</a> for matrices and linear algebra (another header-only library)</li>
<li>Has a ton of useful, self-contained, and best of all, working tutorials!</li>
</ul>
<p>In my experience, 99% of the effort in setting up the tutorials was performed by git and cmake. But compilation takes a while (sometimes a drawback of header-only libraries), so I&apos;ll go through the setup process, then run the demos from a precompiled build.</p>
<p>Tutorials to try:</p>
<ul>
<li>106_ViewerMenu</li>
<li>201_Normals</li>
<li>202_GaussianCurvature</li>
<li>205_Laplacian</li>
<li>405_AsRigidAsPossible</li>
<li>605_Tetgen</li>
<li>707_SweptVolume</li>
<li>Taking requests!</li>
</ul>
<p>What you need (Windows):</p>
<ul>
<li><a href="https://git-scm.com/downloads">Git</a></li>
<li><a href="https://cmake.org/">CMake</a></li>
<li><a href="https://www.visualstudio.com/downloads/">Visual Studio 2017 Community</a> (2015 should also work, but 64-bit is a must either way!)</li>
</ul>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Scene and Objects]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The scene files are changing once again. The progression so far has only been from XML files to json, with changes to object behavior to better align the two. Essentially I want everything serializable for saving state between files.</p>
<p>Right now, the json file just contains a bunch of fields</p>]]></description><link>https://solaire.cs.csub.edu/aestus/scene-and-objects/</link><guid isPermaLink="false">6205b93728f3207e6a527069</guid><dc:creator><![CDATA[Nick Toothman]]></dc:creator><pubDate>Fri, 30 Dec 2016 06:54:09 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The scene files are changing once again. The progression so far has only been from XML files to json, with changes to object behavior to better align the two. Essentially I want everything serializable for saving state between files.</p>
<p>Right now, the json file just contains a bunch of fields to specify how to load a mesh. Here&apos;s an example:</p>
<pre><code>{
	&quot;name&quot;: &quot;cylinder scene&quot;,
	&quot;meshes&quot;: [
		{
			&quot;name&quot;: &quot;cylinder&quot;,
			&quot;description&quot;: &quot;5 bone cylinder&quot;,
			&quot;path&quot;: &quot;../../data/models/cylinder&quot;,
			&quot;skeletonFile&quot;: &quot;cylinderskeleton.dae&quot;,
			&quot;meshFile&quot;: &quot;cylinder.obj&quot;,
			&quot;tfType&quot;: &quot;mat4&quot;,
			&quot;skinning&quot;: {
				&quot;load&quot;: [ &quot;heatmap-0.45.weights&quot; ],
				&quot;methods&quot;: [ &quot;ir&quot; ]
			},
			&quot;attachment&quot;: {
				&quot;methods&quot;: [ &quot;DirectProjection&quot; ]
			}
		}
	]
}
</code></pre>
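<p>A sketch of a tolerant loader for this format, filling in defaults for fields an older scene file might lack. The field names come from the example above; which fields are required versus optional, and the default values, are my assumptions:</p>

```python
import json

def load_scene(text):
    """Parse a scene description, filling in defaults for missing fields
    so older scene files keep loading as the format grows."""
    scene = json.loads(text)
    meshes = []
    for m in scene.get("meshes", []):
        meshes.append({
            "name": m.get("name", "unnamed"),
            "path": m.get("path", "."),
            "meshFile": m["meshFile"],              # assumed required
            "skeletonFile": m.get("skeletonFile"),  # optional
            "tfType": m.get("tfType", "mat4"),
            "skinning": m.get("skinning", {"load": [], "methods": []}),
        })
    return {"name": scene.get("name", "untitled"), "meshes": meshes}
```

A bare <code>{&quot;meshes&quot;: [{&quot;meshFile&quot;: &quot;cylinder.obj&quot;}]}</code> still parses, with every other field defaulted.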
<p>I&apos;m used to tweaking this file by hand, then reloading the program. But the last few overhauls made me outgrow this approach (read: I have a better UI that will be faster than this), so instead I want to do it the &quot;right&quot; way, with a file storage system that:</p>
<ul>
<li>maintains a directory for data storage. Textures, skin weights, UV sets, and animations would all be located here. Currently an input mesh knows its root folder (<code>&quot;path&quot;</code> in the above json) so it can load files relative to it. I want to do the same thing for the entire scene&apos;s data.</li>
<li>is binary-formatted. I&apos;m working with meshes, which means lots of sequential data of a known size. Loses human readability, but I&apos;m OK with that. The documentation will likely include a schema for parsing the scene file if necessary. Why not just stick with json text and list out the relevant binary files to load? Well, that makes file history harder to control. I think it&apos;s still OK to have the mesh and skeleton data (skin weights, animations, UVs, etc.) exportable and importable to their own binary format for reuse, but the general scene file should provide this information on its own.</li>
<li>preserves object state. <code>Deform</code> objects are responsible for changing behavior on meshes and skeletons. They&apos;re applied in order of creation and generally support undo/redo operations. Serializing these would allow me to save out command history, but I&apos;m not convinced I want to copy Maya <em>that</em> much just yet. Having the current state preserved would be enough for iterative animation development. A reminder that the software&apos;s goals include supporting fast and easy exploration of ideas.</li>
<li>I use a lot of smooth splines created by input device position (list of 2D points on desktop, or 3D points in VR, possibly filtered down with <a href="https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm">RDP</a>, then chained smoothly together as a series of B&#xE9;zier curves), and I want them available in between sessions. Or at least, I want the abstraction that they ultimately provide available in between sessions.</li>
<li>Flexible parsing. I want it to handle gradual changes to the format gracefully. An old scene file should still load if it doesn&apos;t have something a new parser expects.</li>
</ul>
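<p>The RDP filtering mentioned above, sketched for 2D point lists (the 3D case only changes the point-to-chord distance function):</p>

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points closer than epsilon to the chord
    between the first and last point, recursing on the farthest outlier."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    best_i, best_d = 0, -1.0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        if chord == 0:
            d = math.hypot(x0 - x1, y0 - y1)  # degenerate chord: plain distance
        else:
            # Perpendicular distance from (x0, y0) to the chord.
            d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / chord
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= epsilon:
        return [points[0], points[-1]]
    return rdp(points[:best_i + 1], epsilon)[:-1] + rdp(points[best_i:], epsilon)
```

Near-collinear samples collapse to their endpoints, while genuine corners survive, which is exactly the filtering step before fitting the B&#xE9;zier chain.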
<p>Let&apos;s warm up with some of the types we care most about storing:</p>
<ul>
<li><code>Mesh</code></li>
<li>numVertices (<code>size_t</code>), followed by that many <code>MeshVertex</code> values representing the mesh vertices in the bind pose
<ul>
<li><code>MeshVertex</code>: position (<code>vec3</code>), normal (<code>vec3</code>), and uv (<code>vec2</code>). Ordered by index.</li>
</ul>
</li>
<li>numFaces (<code>size_t</code>), followed by that many <code>uivec3</code> representing the mesh faces</li>
<li><code>SkinWeights</code></li>
<li>name (<code>string</code>), numVertices (<code>size_t</code>), followed by that many <code>SkinWeight</code> (indices (<code>ivec4</code>), weights (<code>vec4</code>))</li>
<li><code>SkeletonAttachment</code></li>
<li>name (<code>string</code>), mesh (<code>Mesh</code>), and binding (<code>SkeletonBinding</code>)</li>
<li><code>SkeletonBinding</code>:</li>
<li>numVertices (<code>size_t</code>), followed by that many <code>VertexAttachment</code> (boneIndices (<code>ivec2</code>), t (<code>vec2</code>))</li>
<li><code>Skeleton</code></li>
<li>numJoints (<code>size_t</code>), followed by the output of the root <code>Transform</code>. Each transform writes itself, then its children.
<ul>
<li><code>Transform</code>:
<ul>
<li>name (<code>string</code>)</li>
<li>color (<code>vec4</code>)</li>
<li>parent index (<code>int</code>): -1 if no parent</li>
<li>translation (<code>vec3</code>): bind translation</li>
<li>rotation (<code>quaternion</code>): bind rotation</li>
<li>scale (<code>vec3</code>): bind scale</li>
<li>numChildren (<code>size_t</code>), followed by that many child indices (<code>int</code>)</li>
</ul>
</li>
</ul>
</li>
</ul>
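<p>As a sketch of how one of these records could round-trip through a binary blob, here&apos;s the <code>SkinWeights</code> layout above expressed with Python&apos;s <code>struct</code> module, standing in for the eventual C++ reader/writer. Little-endian byte order, 32-bit indices/weights, and the length-prefixed name encoding are my assumptions:</p>

```python
import struct

# One SkinWeight per vertex: indices (ivec4) as int32, weights (vec4) as float32.
RECORD = struct.Struct("<4i4f")

def write_skin_weights(name, records):
    """Serialize: name length + utf-8 name, vertex count, then the records."""
    blob = name.encode("utf-8")
    out = struct.pack("<I", len(blob)) + blob + struct.pack("<Q", len(records))
    for indices, weights in records:
        out += RECORD.pack(*indices, *weights)
    return out

def read_skin_weights(data):
    """Inverse of write_skin_weights; returns (name, [(indices, weights), ...])."""
    (name_len,) = struct.unpack_from("<I", data, 0)
    name = data[4:4 + name_len].decode("utf-8")
    offset = 4 + name_len
    (count,) = struct.unpack_from("<Q", data, offset)
    offset += 8
    records = []
    for _ in range(count):
        vals = RECORD.unpack_from(data, offset)
        records.append((vals[:4], vals[4:]))
        offset += RECORD.size
    return name, records
```

An explicit <code>Struct</code> per record type doubles as the schema documentation mentioned earlier, and unused influence slots can simply carry index -1 with weight 0.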
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>