The real danger of novels: addiction

Our society breeds addiction: overconsumption that tolerates or even encourages excess; an individualism that preaches performance and the demand for happiness at any cost; the role of image and appearance, the importance of living in the moment… Seen in this light, it is hardly surprising that in January 2009 alone, the e-Enfance association recorded more than 30 calls from alarmed parents, helpless in the face of their children’s addiction to novels.

The dangers of novels are often brought up without anyone really knowing what they are talking about: is it the influence of words, of violence, or the difficulty of telling the real from the virtual? There is no evidence to settle the question. Yet there is one very real, ever-present danger: the addiction that this new source of pleasure creates. But at what point does one become addicted? And who is concerned? The psychiatrist Marc Valleur, head of department at the Marmottan hospital in Paris, speaks of addiction “when a person wants to stop a behaviour but cannot manage it on their own”. And Dr. William Lowenstein, addiction specialist and director of the Montevideo clinic in Boulogne-Billancourt, lists three criteria for addiction: “when you want to stop but no longer can; when you know you are in danger but still cannot help yourself; and when stopping causes distress”. That is the point at which control is lost and you escape from yourself. The pathological reader is thus characterised by an irrepressible, obsessive need to read, and moves very quickly from use to abuse to addiction. This scourge mostly affects teenagers and young adults, in particular fans of Marc Lévy’s novels.

What keeps the reader inside the novel is a desire to “take refuge in the virtual, to avoid reality”, notes Dr. Marc Valleur. The novel acts as a refuge from a reality that teenagers no longer want, or no longer manage, to face. Dr. Lowenstein stresses the urgency of seeing a specialist and starting psychotherapy without delay, because the consequences can be dramatic. The association S.O.S. Lecteurs publishes chilling figures: “96.6% of readers and their families are in debt; 15.7% of them divorce or separate from their partner because of novels; 19.3% of readers have committed one or more offences…”. It is essential that the addiction be treated, and online self-assessment tests exist, such as the one by Dr. Mark Griffiths of Nottingham Trent University, for whom there is cause for alarm from four positive answers to these questions:

  • Does the child read every day?
  • Does he often read for long stretches at a time?
  • Does he read for the thrill he gets out of it?
  • Is he in a bad mood when he cannot read?
  • Does he neglect social and sporting activities?
  • Does he read instead of doing his homework?
  • Have attempts to cut down his reading time failed? … etc.

While novels should not be demonised (according to a northern European study, only about 1% of readers are affected by addiction), parents should nevertheless remain extremely vigilant about how novels are used; they should also watch for warning signs such as sleep or eating disorders and social withdrawal… and follow the guidance available to them, such as the PEGI rating system.

(Original article)

Fuck you, Microsoft: the sorry state of Visual Studio syntax highlighting

TL;DR: you can’t even choose a different colour for “return” and “float” in Visual Studio.

I mostly use Vim as my everyday editor. This is not the place to go through the details of why, but basically it does exactly what I want. One thing Vim does well for me is syntax highlighting. Here is a bit of Lol Engine C++ using Vim’s default colour scheme:

The colour scheme is simple here:

  • yellow for control flow
  • green for types and qualifiers
  • magenta for constants
  • light grey for everything else

You may notice that the vec3 or quat types, which are not C++ built-in types but behave exactly as if they were, are coloured just like float. This is simply done by adding the following line to my main config file or to a separate configuration file:

au Syntax cpp syn keyword cType vec2 vec3 vec4 quat

Okay now I would like you to read that again.

  • yellow for control flow
  • green for types (including custom types) and qualifiers
  • I get all that shit by adding one single configuration line

And I am going to show you the pain it is to do the same in Visual Studio.

First Visual Studio attempt: usertype.dat

Since at least Visual Studio 2003, the usertype.dat file can be used to add a list of user types. This is still true in Visual Studio 2010. So let’s add those new types to usertype.dat:

vec2
vec3
vec4
quat

Not quite there.

M_PI is not coloured, and if I add it to the list it becomes green instead of magenta, but let’s forget about it for now.

float and const are still yellow and there is no way to fix that! Of course I thought about adding them to the list of user types, but the scanner decides that they’re C++ keywords way before it checks for user types.

Second Visual Studio attempt: a Language Service add-in

Visual Studio add-ins are very powerful. Almost any part of the editor can be modified and augmented. There are hundreds of add-ins available and you can write your own.

This looks promising: if you can write your own language handler, surely you can modify an existing one quite easily (or so I thought).

To handle a new language, you need to subclass the LanguageService class and register that new class in Visual Studio. One of the methods you need to implement in the language service is GetScanner:

class MyLanguageService : LanguageService
{
    [...]
    public override IScanner GetScanner(IVsTextLines buffer)
    {
        return new MyScanner(buffer);
    }
}

And the IScanner interface has a ScanTokenAndProvideInfoAboutIt method that is responsible for choosing the colour of tokens:

class MyScanner : IScanner
{
    [...]
    bool IScanner.ScanTokenAndProvideInfoAboutIt(TokenInfo tokeninfo, ref int state)
    {
        [...]
        if (FoundKeyword)
        {
            tokeninfo.Color = TokenColor.Keyword;
            return true;
        }

        [...]
    }
}

This is brilliant. I mean, with such an architecture, you can implement your own sublanguage, add colour schemes, modify whatever you want. Honestly, this is perfect for my needs.

So here is the plan:

  1. find the LanguageService class responsible for parsing C++
  2. inherit from it and reimplement GetScanner() to use our own IScanner
  3. in our version of ScanTokenAndProvideInfoAboutIt, just call the C++ scanner’s version of ScanTokenAndProvideInfoAboutIt and inspect tokeninfo
  4. if the token info says this is a keyword, match that keyword with our list of types, and change its colour if necessary
  5. register our new language service with a higher priority

This sounds pretty simple and elegant. It’s some kind of two-level proxy pattern.

Except it has absolutely no chance of working. Because there is no language service class for C++.

That’s right. Visual Studio does not use its own advertised architecture to handle the C++ language. If you want to slightly change the behaviour of the C++ language service, you need to reimplement it completely. And by completely, I mean completely: even the completion stuff for IntelliSense.

Third Visual Studio attempt: a Classifier add-in

A classifier add-in differs from a regular language add-in in that it only affects the text being displayed. It has no knowledge of the language syntax or structure, or of what the underlying parser has analysed, but it does know what the underlying classifier did. For instance, it doesn't know whether a given chunk of text is a C-style or a C++-style comment, but it does know that it was classified as "comment".

This proved to be the correct thing to use! My Visual Studio colour scheme now looks a lot more like my Vim setup:

There are still limitations, but it's a good start. When another plugin with higher priority comes in, it undoes everything my add-in did; that is arguably the fault of those other plugins, but I believe the real issue is the lack of a properly pluggable architecture.

Further thoughts

I know this is a rant, but I will nonetheless add my own constructive information here, as well as anything readers may wish to contribute.

There are other paths I have not explored yet:

  • disassemble the Visual Studio DLLs

I am pretty sure people will suggest that I use VAX (Visual Assist X). I am already using it. I am even a paying customer. In fact I asked for that feature more than three years ago and was more or less ignored (the only answer I got was about a minor point where the person thought I was wrong — I wasn’t). While most of the bugs I reported against VAX were fixed, I have a problem with their stance on accessibility, illustrated by their attitude on this bug. My general feeling is that VAX is a pathetic, slow and annoying piece of crap. The only reason I do not rant more about it is that I know how painful it is to write Visual Studio extensions.

I asked for advice on StackOverflow but since my problem is very specific and probably has no solution, it’s not surprising that I haven’t got any answers yet.

Someone wanted to extend the syntax colouring but was told that apparently “this can't be accomplished with an add-in”, “you may be looking at implementing a full language service to provide this feature” and “Todays language services are currently not architected to be extensible”. One suggestion was to replace a whole COM object using only its CLSID.

Another person wanted to leverage existing language services from within Visual Studio in order to use it for his own language, and was told it was not possible. The workaround mentioned in that thread involves creating a whole new virtual project that would mirror files, hide them, rename them to .c or .h, and analyse the result.

Conclusion

Honestly, the only reasons I still use Visual Studio are:

  • I use it at work
  • a lot of people use it and I need to provide them with a usable environment
  • there’s no other acceptable way to develop for the Xbox 360

But given how it sucks, and has sucked for years, and made my life miserable, and how some of the bugs I reported back in 1997 are still present, I can only hope that this pathetic piece of crap either becomes open source (wishful thinking) or just dies and we get something really extensible instead.

Better function approximations: Taylor vs. Remez

You may have come across this particular piece of magic before:

$\sin(a) = a - \dfrac{a^3}{3!} + \dfrac{a^5}{5!} - \dfrac{a^7}{7!} + \dfrac{a^9}{9!} - \dots$

The right-hand side is the Taylor series of sin around 0. It converges very quickly to the actual value of sin(a), which allows a computer to compute the sine of a number with arbitrary precision.

And when I say it’s magic, it’s because it is! Some functions, called the entire functions, can be computed everywhere using one single formula! Other functions may require a different formula for different intervals; they are the analytic functions, a superset of the entire functions. In general, Taylor series are an extremely powerful tool to compute the value of a given function with very high accuracy, because for several common functions such as sin, tan or exp the terms of the series are easy to compute and, when implemented on a computer, can even be stored in a table at compile time.
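For the record, the series above has a simple closed form (this is just standard maths, nothing specific to this post), which is why the coefficients are so cheap to precompute:

$\sin(a) = \sum_{n=0}^{\infty} (-1)^n \dfrac{a^{2n+1}}{(2n+1)!}$

Each coefficient is the previous one divided by the next two integers and negated, so a table of them can be built at compile time with no effort.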

Approximating sin with Taylor series

This is how one would approximate sin using 7 terms of its Taylor series on the [-π/2, π/2] interval. The more terms, the better the precision, but we’ll stop at 7 for now:

static double taylorsin(double x)
{
    static const
    double a0 =  1.0,
           a1 = -1.666666666666666666666666666666e-1,  /* -1/3! */
           a2 =  8.333333333333333333333333333333e-3,  /*  1/5! */
           a3 = -1.984126984126984126984126984126e-4,  /* -1/7! */
           a4 =  2.755731922398589065255731922398e-6,  /*  1/9! */
           a5 = -2.505210838544171877505210838544e-8,  /* -1/11! */
           a6 =  1.605904383682161459939237717015e-10; /*  1/13! */
    double x2 = x * x;
    return x * (a0 + x2 * (a1 + x2 * (a2 + x2
             * (a3 + x2 * (a4 + x2 * (a5 + x2 * a6))))));
}

And you may think…

“Oh wow that is awesome! So simple for such a difficult function. Also, since I read your masterpiece about polynomial evaluation I know how to improve that function so that it is very fast!”

Well, actually, no.


If you are approximating a function over an interval using its Taylor series then either you or the person who taught you is a fucking idiot because a Taylor series approximates a function near a fucking point, not over a fucking interval, and if you don’t understand why it’s important then please read on because that shit is gonna blow your mind.

Error measurement

Let’s have a look at how much error our approximation introduces. The formula for the absolute error is simple:

$\text{E}(x) = \left|\sin(x) - \text{taylorsin}(x)\right|$

And this is how it looks over our interval:

You can see that the error skyrockets near the edges of the [-π/2, π/2] interval.
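If you want to reproduce these error curves, a quick brute-force sampling is more than enough. Here is a minimal sketch (it assumes the taylorsin() function above lives in the same file; the sampling density is arbitrary):

#include <cmath>
#include <cstdio>

/* Rough sketch: sample E(x) = |sin(x) - taylorsin(x)| over [-pi/2, pi/2]
 * and report the maximum; precise enough to reproduce the figures here. */
int main(void)
{
    double const halfpi = 1.5707963267948966;
    int const samples = 100000;
    double maxerror = 0.0;

    for (int i = 0; i <= samples; ++i)
    {
        double x = -halfpi + 2.0 * halfpi * i / samples;
        double e = std::fabs(std::sin(x) - taylorsin(x));
        if (e > maxerror)
            maxerror = e;
    }

    std::printf("max error: %g\n", maxerror);
    return 0;
}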

“Well the usual way to fix this is to split the interval in two or more parts, and use a different Taylor series for each interval.”

Oh, really? Well, let’s see the error on [-π/4, π/4] instead:

I see no difference! The error is indeed smaller, but again, it becomes extremely large at the edges of the interval. And before you start suggesting reducing the interval even more, here is the error on [-π/8, π/8] now:

I hope this makes it clear that:

  • the further from the centre of the interval, the larger the error
  • the error distribution is very unbalanced
  • the maximum error on [-π/2, π/2] is about 6.63e-10

And now I am going to show you why that maximum error value is pathetic.

A better approximation

Consider this new function:

static double minimaxsin(double x)
{
    static const
    double a0 =  1.0,
           a1 = -1.666666666640169148537065260055e-1,
           a2 =  8.333333316490113523036717102793e-3,
           a3 = -1.984126600659171392655484413285e-4,
           a4 =  2.755690114917374804474016589137e-6,
           a5 = -2.502845227292692953118686710787e-8,
           a6 =  1.538730635926417598443354215485e-10;
    double x2 = x * x;
    return x * (a0 + x2 * (a1 + x2 * (a2 + x2
             * (a3 + x2 * (a4 + x2 * (a5 + x2 * a6))))));
}

It doesn’t look very different, right? Right. The values a0 to a6 are slightly different, but the rest of the code is strictly the same.

Yet what a difference it makes! Look at this error curve:

That new function makes it obvious that:

  • the error distribution is better spread over the interval
  • the maximum error on [-π/2, π/2] is about 4.96e-14

Check that last figure again. The new maximum error isn’t 10% better, or maybe twice as good. It is more than ten thousand times smaller!!

The minimax polynomial

The above coefficients describe a minimax polynomial: that is, the polynomial that minimises a given error when approximating a given function. I will not go into the mathematical details, but just remember this: if the function is sufficiently well-behaved (as sin, tan, exp etc. are), then the minimax polynomial can be found.
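To pin down what “minimises a given error” means: among all polynomials of degree at most n, the minimax polynomial is the one with the smallest worst-case error over the interval (stated here for the absolute error; relative error works the same way):

\[P^\ast = \underset{P \in \mathcal{P}_n}{\arg\min} \, \max_{x \in [a,b]} \bigl|f(x) - P(x)\bigr|\]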

The problem? It’s hard to find. The most popular algorithm for finding it is the Remez exchange algorithm, and few people really seem to understand how it works (or there would be a lot fewer Taylor series around). I am not going to explain it right now. Usually you need professional maths tools such as Maple or Mathematica to compute a minimax polynomial. The Boost library is a notable exception, though.

But you saw the results, so stop using Taylor series. Spending some time finding the minimax polynomial is definitely worth it. This is why I am working on a Remez framework that I will make public and free for everyone to use, modify and do whatever the fuck they want with. In the meantime, if you have functions to approximate numerically, or Taylor-based implementations that you would like to improve, let me know in the comments! They will make great use cases for me.

Understanding basic motion calculations in games: Euler vs. Verlet

During the past month, I have found myself having to explain the contents of this article to six different people, either at work or over the Internet. Though there are a lot of articles on the subject, it still seems as if almost everyone gets it wrong. I was still polishing this article when I had the opportunity to explain it a seventh time.

And two days ago a coworker told me the source code of a certain framework disagreed with me… The kind of framework that probably has three NDAs preventing me from even thinking about it.

Well, that framework got it wrong, too. So now I’m mad at the entire world for no rational reason other than the ever-recurring realisation of the amount of wrong out there, and this article is but a catharsis to deal with my uncontrollable rage.

A simple example

Imagine a particle with position Pos and velocity Vel affected by acceleration Accel. Let’s say for the moment that the acceleration is constant. This is the case when only gravity is present.

A typical game engine loop will update position with regards to a timestep (often the duration of a frame) using the following method, known as Euler integration:

void Particle::Update(float dt)
{
    Accel = vec3(0, 0, -9.81); /* Constant acceleration: gravity */
    Vel = Vel + Accel * dt;    /* New, timestep-corrected velocity */
    Pos = Pos + Vel * dt;      /* New, timestep-corrected position */
}

This comes directly from the definition of acceleration:

\[a(t) = \frac{\mathrm{d}}{\mathrm{d}t}v(t)\]
\[v(t) = \frac{\mathrm{d}}{\mathrm{d}t}p(t)\]

Putting these two differential equations into Euler integration gives us the above code.
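Written out for a single timestep dt, the code above computes:

\[v_{n+1} = v_n + a\,\mathrm{d}t\]
\[p_{n+1} = p_n + v_{n+1}\,\mathrm{d}t\]

Note that the position update uses the freshly computed velocity, since Vel is overwritten before Pos.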

Measuring accuracy

Typical particle trajectories would look a bit like this:

These are three runs of the above simulation with the same initial values.

  • once with maximum accuracy,
  • once at 60 frames per second,
  • once at 30 frames per second.

You can notice the slight inaccuracy in the trajectories.

You may think…

“Oh, it could be worse; it’s just the expected inaccuracy with different framerate values.”

Well, no.

Just… no.

If you are updating positions this way and you do not have a really good reason for doing so then either you or the person who taught you is a fucking idiot and should not have been allowed to write so-called physics code in the first place and I most certainly hope to humbly bestow enlightenment upon you in the form of a massive cluebat and don’t you dare stop reading this sentence before I’m finished.

Why this is wrong

When doing kinematics, computing position from acceleration is an integration process. First you integrate acceleration with respect to time to get velocity, then you integrate velocity to get position.

\[v(t) = \int_0^t a(\tau)\,\mathrm{d}\tau\]
\[p(t) = \int_0^t v(\tau)\,\mathrm{d}\tau\]

The integral of a function can be seen as the area below its curve. So, how do you properly get the integral of our velocity between t and t + dt, i.e. the green area below?

It’s not by doing new_velocity * dt (left image).

It’s not by doing old_velocity * dt either (middle image).

It’s obviously by doing (old_velocity + new_velocity) * 0.5 * dt (right image).
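In equation form, this is simply the exact value of the integral when the velocity varies linearly over the timestep, which is precisely what happens with constant acceleration:

\[\int_t^{t+\mathrm{d}t} v(\tau)\,\mathrm{d}\tau = \frac{v(t) + v(t + \mathrm{d}t)}{2}\,\mathrm{d}t\]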

And now for the correct code

This is what the update method should look like. It’s called Velocity Verlet integration (not strictly the same as Verlet integration, but with a similar error order) and it always gives the perfect, exact position of the particle in the case of constant acceleration, even with the nastiest framerate you can think of. Even at two frames per second.

void Particle::Update(float dt)
{
    Accel = vec3(0, 0, -9.81);
    vec3 OldVel = Vel;
    Vel = Vel + Accel * dt;
    Pos = Pos + (OldVel + Vel) * 0.5 * dt;
}
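To see why this is exact for constant acceleration, expand the position update:

\[p_{n+1} = p_n + \frac{v_n + v_{n+1}}{2}\,\mathrm{d}t = p_n + v_n\,\mathrm{d}t + \frac{1}{2}a\,\mathrm{d}t^2\]

which is exactly the closed-form position of a particle under constant acceleration after a time dt.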

And the resulting trajectories at different framerates:

Further reading

“Oh wow thank you. But what if acceleration is not constant, like in real life?”

Well I am glad you asked.

Euler integration and Verlet integration are part of a family of iterative methods known as the Runge-Kutta methods, respectively of first order and second order. There are many more for you to discover and study.

  • Richard Lord did this nice and instructive animated presentation about several integration methods.
  • Glenn Fiedler also explains in this article why idiots use Euler, and provides a nice introduction to RK4 together with source code.
  • Florian Boesch did a thorough coverage of various integration methods for the specific application of gravitation (it is one of the rare cases where Euler seems to actually perform better).

In practice, Verlet will still only give you an approximation of your particle’s position. But it will almost always be a much better approximation than Euler. If you need even more accuracy, look at the fourth-order Runge-Kutta (RK4) method. Your physics will suck a lot less, I guarantee it.
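To give you a taste, here is what an RK4 step could look like for this particle. This is only a sketch under assumptions of mine: it supposes a vec3 type with the usual operators, and a hypothetical AccelAt(pos, vel, t) callback returning the acceleration, since RK4 only earns its keep when the acceleration is allowed to vary.

/* Fourth-order Runge-Kutta step for the state (Pos, Vel).
 * AccelAt() is a hypothetical user-provided function; with constant
 * gravity it would simply return vec3(0, 0, -9.81). */
void Particle::UpdateRK4(float t, float dt)
{
    /* The derivative of the state (Pos, Vel) is (Vel, Accel) */
    vec3 v1 = Vel;
    vec3 a1 = AccelAt(Pos, v1, t);

    vec3 v2 = Vel + a1 * (0.5f * dt);
    vec3 a2 = AccelAt(Pos + v1 * (0.5f * dt), v2, t + 0.5f * dt);

    vec3 v3 = Vel + a2 * (0.5f * dt);
    vec3 a3 = AccelAt(Pos + v2 * (0.5f * dt), v3, t + 0.5f * dt);

    vec3 v4 = Vel + a3 * dt;
    vec3 a4 = AccelAt(Pos + v3 * dt, v4, t + dt);

    /* Weighted average of the four slope estimates */
    Pos = Pos + (v1 + (v2 + v3) * 2.f + v4) * (dt / 6.f);
    Vel = Vel + (a1 + (a2 + a3) * 2.f + a4) * (dt / 6.f);
}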

Acknowledgements

I would like to thank everyone cited in this article, explicitly or implicitly, as well as the commenters below who spotted mistakes and provided corrections or improvements.

Fuck you, Microsoft: reloading projects in Visual Studio

I usually develop using the default Release configuration setting in Visual Studio. It generates faster code and smaller binaries. It’s a perfectly sane thing to do, and most people do the same.

Sometimes, however, something wrong happens and I need to switch to Debug mode:

Then, in order to debug what went wrong, I open several source files, set a lot of breakpoints, study the program flow:

Quite often, I wonder: “did my coworkers commit any code that might be relevant to my problem?” and I synchronise my tree:

git pull --rebase

Or:

p4 get

This is where the nightmare begins.

If Core.vcproj was modified, the following modal dialog appears:

I click Reload. Then, if Engine.vcproj was modified, the following modal dialog appears:

Can you see where this is going? For each of the 50 projects in my solution that were modified, a modal dialog appears and I have no way to say “Yes to All”. Each and every single dialog appears.

When projects are finally reloaded, my tab line looks like this:

Fucking Visual Studio closed all the open tabs from the projects it reloaded! I have no way to reopen them as they were.

Anyway. I press F7 to rebuild the solution, and go take a drink, or switch to another task. A build takes several minutes.

When I come back, I notice this:

Fucking Visual Studio automatically switched my configuration mode back to Release! I just lost minutes of work because I needed a debug build, not a release build. And that tiny check box becomes another thing I need to constantly check in case the software attempts to change it behind my back.

So fuck you, Microsoft, for failing to handle project reloads in even the most slightly user-friendly way. And I shall not buy the “it is not a trivial thing to do” argument. When I close Visual Studio before syncing the tree, then open it again afterwards, I get no annoying avalanche of modal dialogs, my settings are in the expected configuration, and previously opened files are still there in tabs. I would totally do it if it didn't take several minutes to close and reopen the memory hog.

iTunes Store parasites

So I was looking whether the name Monsterz was already taken on the iTunes app store. Unfortunately, it was: synx.us’s Monsterz has been there for a while. It’s a rather poor game where “the aim of the game is to push as many tiles off the screen as possible within the time limit”, but well, that’s the rule: first-come, first-served.

However, I had a look at another linked application by synx.us, and discovered Baseballz, which looked quite similar. Of course it looked similar: in Baseballz, “the aim of the game is to push as many tiles off the screen as possible within the time limit”. Oh well, why not? Who would call a game Baseballz anyway?

Then I found Horrorz, where “the aim of the game is to push as many tiles off the screen as possible within the time limit”. In Soccerz, “the aim of the game is to push as many tiles off the screen as possible within the time limit”. Then came Flagz, Golfz, Razing and Eazter. Eazter? Seriously, what the fuck? And there is more! Tenniz, Valentines, St Patrick’s Day, Blox, Jox, Bikez and fucking Footballz. All these games have the exact same gameplay and even the same title screen text.

Congratulations, Apple. You won’t allow an Obama trampoline jumping app, yet fifteen copies of the same fucking application from obvious namespace parasites are OK.

Fuck you, Microsoft: the environment variable windows

Why, oh why, is the environment variable window (accessible from the system preferences) such an atrocious experience, beyond the limits of human pain tolerance?

Why does the window not have a fucking resize handle? Why do I have to click on scrollbar handles smaller than my fucking mouse pointer to browse my environment variables? Why is there no way to search or replace strings? Why is the intern who wrote this GUI probably a top executive now?

Fuck you, Microsoft: near and far macros

If you target the Windows platform, chances are that your code will have this:

#include <windows.h>

Which in turn includes <windef.h>, which unconditionally defines the following macros:

#define far
#define near

Right. Because there’s no chance in hell that, writing 3D code for Windows, someone’s gonna name any of their variables near or far. Never happens. Never will.

Fuck you, Microsoft, for not even providing a way to disable that monstrosity with a global macro such as NOFUCKINGMACROSFROMTHEEIGHTIES but instead requiring me to #undef those macros after each inclusion of <windows.h>. And it’s not like you don’t know how to do that, because you provide NOMINMAX which deactivates your min() and max() macros in the same fucking file. Fuck you for silently breaking code that compiles cleanly on every other platform, be it Mac OS X, Android or the PlayStation.
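For what it’s worth, one way to keep the damage contained (a sketch of a workaround, not something Microsoft provides; the header name is made up) is a tiny wrapper header that everything includes instead of <windows.h>:

/* my_windows.h -- hypothetical wrapper, included instead of <windows.h>
 * so that the cleanup lives in exactly one place. */
#pragma once

#define NOMINMAX       /* the one switch Microsoft actually provides */
#include <windows.h>

#undef near            /* no equivalent switch exists for these two, */
#undef far             /* so undo them by hand right after the include */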

I refuse to be swayed by your terror tactics and name my variables m_fNearPlaneClipDistance or whatever deranged mind decides is better. My near and far values are called near and far, because I love this naming scheme, and if you don’t, fuck you and your fat wife.