Best of 2018: How and why video game ‘downgrades’ happen
It’s been a wild year for VG247, so to celebrate we’re going to be republishing some of our favourite work published in 2018 – opinion pieces, features, and interviews that we’ve enjoyed writing and reading, and which we believe showcase some of our best work. Enjoy!
How and why video game ‘downgrades’ happen was first published on September 4, 2018.
There’s been a lot of chatter lately about video game ‘downgrades’, where the finished game appears graphically inferior to its reveal trailer.
Some see these reductions in scope as a massive middle finger pointed directly at their face, almost as if the developers behind the games were purposely taking away all their graphics. “No graphics for you, gamer boy,” they probably say while sitting on a throne made out of microtransaction money.
Whether it’s the density of The Witcher 3’s world or the puddles in Spider-Man, it’s a topic that pops up more than the textures in the first Rage.
So, let’s dig into exactly what downgrades are and why they happen, with insight from people who actually know what they’re talking about.
Since we all love a bit of drama, why not kick off with some actual bad practices? Sometimes, downgrades are the result of wasted resources, occasionally initiated by a decision from on high, according to one source familiar with development on triple-A Ubisoft games.
“I found they were very insistent on pushing the quality of their products for mass market appeal at global events,” my source tells me. “This leads to a lot of man-hours churning through content and poring over the quality of the game during an intensive period.”
This quality does not extend to the rest of the game - all we get is a highly polished vertical slice that the developers can’t hope to replicate across an entire sandbox world. In Driver: San Francisco, this affected Marin, a late-game county that looked nowhere near as nice as San Francisco’s main body. This is a case of the developer putting all the money up front, since only a small portion of players actually see the credits.
Sometimes, instead of rolling with it, the developer decides to scale back the rest of the game instead.
“[Occasionally], the level of parity in the game needs to be identical, therefore you reduce quality to ensure that everything looks of equal quality,” my source explains. “On The Division, Xbox had a deal with Ubisoft, and part of that deal was to ensure that the PS4 version did not look better than [the Xbox One version]. The resolution, frame rates, and density of assets could have been higher on PS4, but this was vetoed to prevent issues with Microsoft.”
Of course, it’s not always internal politics that are the cause of massive changes to the end product. Video games are complex, malleable things that constantly change and shift across their development. Imagine working on something creative for years only for the finished product to be exactly the same as the original design document: it rarely happens. In fact, lots of high profile games started out life as a different game entirely, and lots never see the light of day at all.
“Games are not created by following a predetermined route along which there are regular milestones of incremental and noticeable improvement,” lighting and VFX specialist Richard Whitelock explains. “If this were true, game development would be so much easier. Instead, games are developed with many constraints that vary over their lifetime, interspersed with course corrections that shift the project’s direction towards a destination that is itself moving in and out of view.
“Along the way to this final destination there are going to be diversions - a typical example being a major push to show the game at an event. Perhaps it wasn’t even initially planned to be exhibited. These are duly created to suit the demands of that vertical slice. The art and tech are both in nascent stages; the team is excited about new features or visual FX and excitedly splashes them around. The area in which the demo takes place is quite constrained - the rest of the game may not be enabled around it yet - so extra elements may be added in. Where screenshot cameras will be has already been planned in advance, and lights are placed to specifically make that shot pop.”
We’re looking at a collection of influences, then: the pressure to create something that shows off an unfinished game as if it were done, the ever-changing nature of developing a triple-A game, marketing interference, and the developers figuring out the tools they can use.
“Every new graphical feature you give the team gets massively overused for the first month, be it a new form of ambient occlusion, lens flare, or screen space reflection,” Simon Roth, a game developer with experience in graphics programming and profiling, tells me. “Management go ‘wow’, the art team get excited, and it takes a month for people to realise it's impacting the visuals more than it needs to.”
One of the areas Roth has expertise in - profiling - involves measuring the processing and memory costs of different parts of the graphics pipeline. In other words, this is where optimisation comes in, and things are cut back to squeeze better performance out of your games. Most players will tell you they would rather have a smooth frame rate than better visual fidelity. But when a game is cut back to service this, people lose their minds and tweet hashtags like “#puddlegate”.
“Optimisation is a hard, tedious and expensive thing to do,” Roth explains. “On a triple-A game it's left very late because you don't want to replicate the work. Profiling involves specialist software, taking captures of a game, and then stepping through frame by frame, analysing what elements use what amount of resources. Until that point, people have been working to a budget, aiming for the target framerate, but until the work is profiled you basically know nothing.”
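Real profiling uses specialist capture tools rather than anything this simple, but the underlying idea Roth describes - timing each element of a frame and comparing it against a budget - can be sketched in a few lines. This is purely an illustrative toy (all names and numbers here are hypothetical, not from any actual engine):

```python
import time
from contextlib import contextmanager

class FrameProfiler:
    """Toy sketch: record how many milliseconds each part of a frame takes."""

    def __init__(self):
        self.timings = {}

    @contextmanager
    def section(self, name):
        # Time a named chunk of frame work, e.g. shadows or post-processing.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.timings[name] = (time.perf_counter() - start) * 1000.0

    def over_budget(self, budget_ms=16.7):
        # At 60fps a frame has roughly 16.7ms; flag frames that blow past it.
        return sum(self.timings.values()) > budget_ms

profiler = FrameProfiler()
with profiler.section("shadows"):
    time.sleep(0.005)  # stand-in for actual rendering work
with profiler.section("post_fx"):
    time.sleep(0.002)

print(profiler.timings)
print("over budget:", profiler.over_budget())
```

A real capture steps through a frame’s draw calls on the GPU, but the budgeting logic - until you measure, “you basically know nothing” - is the same in spirit.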
With the specific example of the reduction in puddles for Marvel’s Spider-Man, all of the developers I speak with agree that they were likely stripped back for a different reason entirely: readability. In the original E3 screenshot, puddles dominate the scene. It’s important for the player to be able to parse enemy silhouettes and make out what threats they are up against, so the puddles were likely reduced to make the scene easier to read.
“I have been working with Mode 7 for the last three years on Frozen Synapse 2,” Whitelock says. “We were very pleased with a lovely new glow effect that really moved the visuals on from the previous title. However, it wasn’t until we began testing that we realised we had gone too far. Feedback from internal and public show builds was that the pretty glow hindered the player’s ability to comprehend the scene - and that is absolutely vital in a tactical game. It took many iterations to find the balance between a suitably luminous cyberpunk terminal display and a precise user interface for tactical planning.”
All of this is why lots of game developers don’t like showing their games too early. It’s why CD Projekt Red held off from showing the Cyberpunk 2077 gameplay demo to the public for a while. Game development is a long process and many things can change for many reasons. Try not to get too upset about it next time, eh? I mean, imagine being upset with how good The Witcher 3 looks. Cry me a puddle.