Google's annual developer conference, Google I/O, just took place. A big part of the conference was dedicated to virtual reality and the unveiling of Google's new VR platform, Daydream, so I've been contemplating what this means for VR in general and mobile VR in particular.
So what is Daydream? Daydream is, in essence, a specification for a headset and input device, together with a number of requirements device manufacturers need to follow to have their devices Daydream certified. Being Daydream certified means the device is capable of providing a great VR experience, similar to what Samsung's Gear VR offers.
Daydream also comes with functionality built into the upcoming version of Android, such as a virtual hub where users can launch their VR apps and games.
More information about Daydream can be found here: https://vr.google.com/daydream/
So why are Daydream and its presence at Google I/O a big deal? When a company of Google's magnitude shows this kind of commitment to VR, it's a fantastic recognition of the platform itself, but it also shows how important VR is to Google. It sends a signal to the developer community that encourages developers to embrace VR content and helps grow the VR market.
In the not too distant future, millions of people will have Daydream certified phones, which, combined with a cheap headset, make for a complete VR setup.
It's not a wild guess that Daydream will be the stepping stone for many users, whose first VR encounter will be with a Daydream certified phone.
We have already seen proof of this with Gear VR: six months post launch, Gear VR already has one million users according to Oculus. With Daydream coming, the mobile VR market will grow rapidly.
Even though mobile VR can't compete with desktop VR in terms of visual fidelity, the vast majority of users will find mobile VR experiences good enough, just as they already do with mobile gaming and apps today.
But what sets mobile VR apart from ordinary mobile games is that the immersion mobile VR provides is almost as good as what you get from PC/console VR, whereas most mobile games today are quite different from their PC/console counterparts.
So Daydream is a big deal because it will help drive adoption of VR regardless of platform. And this is awesome!
It's been a while since I blogged, mostly because I haven't gotten around to writing things down. So this time I wrote a bit more.
This post summarises a little side project I've been playing with lately: how to enable positional tracking for mobile devices. I also took the opportunity to share some thoughts around why I'm passionate about VR in general and mobile VR in particular.
Before I go into what I did and how it works, I want to make it clear that I make no claims to have solved mobile positional tracking or to have done something new. I didn't find any blog posts describing the same setup I went with, and I did get something that works well enough to prove the point I set out to prove, so I wanted to share.
Background
As a mobile evangelist and a big VR enthusiast, combining the two is obvious. For me, mobile VR, or portable VR I should say, is the area that excites me the most. Whenever I think of the future possibilities that portable VR can enable, my mind melts. The visual experience can't compete with what stationary VR offers, but just as with mobile gaming in general, the possibilities of portability often outweigh visual fidelity in my mind.
Stationary VR (PC/console) in itself is absolutely amazing and cannot be described well enough in words; it must be experienced. The feeling of being transported into a different world simply by putting on a headset, and having your brain accept the virtual reality, is fantastic and justifies spending all those dollars on a decent rig + headset(s).
But there are some limitations with stationary VR, which, as the name implies, come down to portability.
One obvious thing is wires. With stationary VR your head is physically attached to a computer with cables, which, besides sometimes being in your way, means you can only experience VR where the computer is. The stationary nature of the computer also means you are limited to its physical space even if headsets become wireless, which I assume they will.
Another thing is room scale tracking, i.e. being able to walk around and have the physical movement of the body reflected in VR. Today only the HTC Vive offers room scale tracking out of the box, and its solution is based on a stationary setup, which again limits the VR experience to the physical place where the setup exists.
Enter mobile. Right out of the box the problem with wires is solved, as the whole experience runs on the mobile device inserted into the headset. But mobile VR lacks a couple of features that are key to why stationary VR is so good.
A main reason the brain buys the illusion of VR is that (at least) the head's positional movement is correctly tracked in addition to its rotation. When the physical movement and the virtual movement match, the brain is happy and accepts the illusion.
Positional tracking is something mobile VR generally lacks, as most solutions only track head rotation. Various people are trying to solve positional tracking for mobile, and the remainder of this post is about my experiment with enabling full body positional tracking for mobile.
The experiment
A common approach to positional tracking is Simultaneous Localization And Mapping (SLAM), using the camera and motion sensors in the mobile phone to track the user's movement. With SLAM all tracking computations take place on the mobile device, where they compete with the actual experience for valuable CPU resources.
The route I explored was to use a depth camera placed to overlook the play area and offload all tracking computations to a separate device. The tracking data is then streamed to the mobile device over either WiFi or Bluetooth.
The frustum represents what the depth camera sees. The circle represents the user.
With this setup it’s possible to do full body tracking of more than one person without it having a performance impact on the mobile device as all heavy computations are performed elsewhere.
The main concern I had going into the project was what impact the latency introduced from streaming the positional data would have on responsiveness. Would it feel sluggish moving around? Best way to find out is to try.
In this experiment I only wanted to test the theory that streaming positional data is an option, so I quickly settled on off-the-shelf products. In this case that meant a Kinect v2 connected to a desktop PC. Not only is the Kinect a great device, but it also comes with an SDK with built-in support for full skeleton tracking! This allowed me to simply use the SDK to get tracking data and focus most of my time on the streaming and latency parts.
To visualize and test the tracking data I used Unity to build a simple test scene to walk around in.
To allow me to iterate quickly and test easily, I started out running everything locally on the PC using an Oculus DK1 instead of a mobile headset. The benefit of using the DK1 was that it too lacks positional tracking, giving me a representative test setup.
To make things easy I decided to stream positional data over a local network using either TCP or UDP rather than Bluetooth. Again, this was the easiest setup to get results quickly, and I figured that if I needed to I could always test a Bluetooth setup at a later stage. An added benefit of using standard network transfer was that it enabled multiple clients to consume the same data simultaneously.
I approached the streaming problem similarly to how networking of gameplay objects works in multiplayer games. On the server side the Kinect is sampled frequently, at ~60 Hz, to make sure the server has recent data available. On the client a different frame rate can be used to consume the data, depending on the client's requirements. Typically I went with 30 Hz for the client, as that matched the Kinect frame rate.
The amount of data being transferred is very small. A full Kinect skeleton is 25 joint positions = 75 floats = 300 bytes. At 30 Hz that's roughly 9 KB of data to transfer per second. To get the bandwidth requirement down the data could be compressed, or a smaller set of joints could be transferred. As my tests took place on a local network with high throughput this wasn't an issue, and I preferred to have the full skeleton transferred.
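Just to make the numbers concrete, a minimal C# sketch of packing and sending one skeleton frame could look like the following. The struct, class and method names are illustrative assumptions, not the exact code I used.

using System.IO;
using System.Net.Sockets;

// Illustrative: pack 25 joint positions (75 floats = 300 bytes) into a
// buffer and push the frame to a connected TCP client.
public struct JointPosition
{
    public float X, Y, Z;
}

public static class SkeletonStreamer
{
    public const int JointCount = 25;

    public static byte[] Pack(JointPosition[] joints)
    {
        // 25 joints * 3 floats * 4 bytes = 300 bytes per skeleton frame.
        using (var ms = new MemoryStream(JointCount * 3 * sizeof(float)))
        using (var writer = new BinaryWriter(ms))
        {
            for (int i = 0; i < JointCount; i++)
            {
                writer.Write(joints[i].X);
                writer.Write(joints[i].Y);
                writer.Write(joints[i].Z);
            }
            return ms.ToArray();
        }
    }

    public static void Send(TcpClient client, JointPosition[] joints)
    {
        byte[] frame = Pack(joints);
        client.GetStream().Write(frame, 0, frame.Length);
    }
}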
Screenshot from PC
Using a simple TCP client/server I quickly got a demo up that samples skeleton data from the Kinect in one process and streams it to the game client running in a separate process. With full skeleton data available it was time to test positional tracking. This was easily achieved by positioning the VR camera relative to the position of the head joint. Combining the built-in head rotation tracking with the streamed head position gave me full head tracking. As it all ran on the same machine, latency wasn't an issue and the tracking worked remarkably well.
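For reference, the camera part boils down to something like the Unity snippet below. Component and field names are made up for illustration, and it assumes the streamed head position is delivered on the main thread in the same space as the camera rig.

using UnityEngine;

// Illustrative Unity component: offsets the VR camera rig by the streamed
// head position, while the headset itself keeps providing rotation.
public class StreamedHeadTracking : MonoBehaviour
{
    public Transform cameraRig;          // parent of the VR camera
    private Vector3 latestHeadPosition;  // last streamed head joint position

    // Called from the main thread whenever the client polls a new sample.
    public void OnHeadPositionReceived(Vector3 headPosition)
    {
        latestHeadPosition = headPosition;
    }

    void LateUpdate()
    {
        // Head rotation comes from the headset's own sensors; we only
        // supply the positional part from the skeleton stream.
        cameraRig.localPosition = latestHeadPosition;
    }
}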
As I was curious about the latency, I created a set of test suites to measure transfer latency. I wanted to know what the difference was between TCP and UDP, and whether there was any difference at all given the small amount of data and the "perfect" conditions of a local network. To limit the number of things that could interfere I created two command line test clients in C# as well as using the Unity prototype. I chose C# so that I could reuse the same code in the Unity prototype. In addition to the C# tests I also made a C++ version to rule out any issues with .NET.
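The measurement itself is nothing fancy. The sketch below is a simplified C# round-trip probe, assuming an echo server is already listening on the given port; half the round trip is taken as the one-way transfer latency. Names and the payload size are illustrative.

using System.Diagnostics;
using System.Net.Sockets;

// Illustrative latency probe over TCP (loopback or LAN).
public static class LatencyProbe
{
    public static double MeasureOnce(string host, int port)
    {
        using (var client = new TcpClient(host, port))
        {
            var stream = client.GetStream();
            var payload = new byte[300];        // same size as a skeleton frame
            var echo = new byte[payload.Length];

            var sw = Stopwatch.StartNew();
            stream.Write(payload, 0, payload.Length);

            // Wait for the server to echo the full payload back.
            int read = 0;
            while (read < echo.Length)
                read += stream.Read(echo, read, echo.Length - read);
            sw.Stop();

            // Half the round trip approximates the one-way transfer latency.
            return sw.Elapsed.TotalMilliseconds / 2.0;
        }
    }
}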
All tests except the iOS one were run on the PC, meaning all network traffic went over the loopback interface. I was a bit surprised by the results:
TCP Client (C# command line): ~0.25-0.5 ms latency
UDP Client (C# command line): ~2 ms latency
TCP Client (C++ command line): ~3 ms latency
TCP Client (Unity): ~14 ms latency
TCP Client (Unity iOS): ~15 ms latency
The differences between the various command line tests can probably be explained by implementation details, but what surprised me was that the same code that yielded the lowest latency from the command line also yielded the highest when running in Unity. Interesting to note is the small increase between running the test inside Unity on the local host and running it on a mobile device: only 1 ms...
At this point I'm assuming I've done something wrong in my code. Something to investigate in the future.
Anyway, even though I have some latency, the test was successful, as can be seen in the video below. It was recorded on an iPhone 6, and as the video shows the skeleton is tracked pretty accurately and smoothly. The camera is offset so that I could see the skeleton. Apologies for my robot-like movement. I guess recording video, holding a phone and moving at the same time was too much ;)
Conclusion
The experiment clearly showed that it's possible to use an external device to do full body tracking and consume the data on a mobile device. Even though this was only a first small step I learned a lot and have a nice list of possible improvements.
A natural next step would be to investigate why I get such a big difference in latency when using Unity (i.e. fix my code), but there are many other improvements that could be made, such as interpolating joint positions when streaming at lower frequencies.
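As a rough idea of what that interpolation could look like (names and the assumed 30 Hz stream rate are illustrative): keep the two most recent samples per joint and blend between them based on how far into the sample interval the render frame is.

using UnityEngine;

// Illustrative: blend between the two latest streamed samples of a joint
// so a 30 Hz stream still animates smoothly at a 60+ Hz render rate.
public class JointInterpolator
{
    private Vector3 previousSample;
    private Vector3 latestSample;
    private float latestSampleTime;
    private const float SampleInterval = 1f / 30f;   // assumed stream rate

    public void AddSample(Vector3 position, float time)
    {
        previousSample = latestSample;
        latestSample = position;
        latestSampleTime = time;
    }

    public Vector3 Evaluate(float renderTime)
    {
        // 0 at the moment the latest sample arrived, 1 a full interval later.
        float t = Mathf.Clamp01((renderTime - latestSampleTime) / SampleInterval);
        return Vector3.Lerp(previousSample, latestSample, t);
    }
}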
Naturally it would be interesting to investigate what it would take to create a mobile setup to make it a true mobile experience.
But that's for another blog post another time.
Latency Updates (March 13th, 2016)
Since this post was originally published I've looked into the latency issues I had with Unity and have gotten latency down to microsecond levels by implementing a native plugin that uses Grand Central Dispatch (GCD) to keep networking off any Unity threads.
I don't know why I like gamepads, but I do. Maybe it's because they have been part of my life since my first Nintendo Game & Watch back in 1982, up until today's 3DS. Maybe it's the tactile feedback when pushing buttons (and the blisters after too much pressing?). Anyway, gamepads are my preferred way of playing the games I like.
I also love mobile gaming, as it has the awesome capability of creating games that I can carry with me wherever I am.
For years I argued for proper gamepad support for iOS, so I was extremely happy when it was announced during WWDC earlier this year. Sure, it might only appeal to a fraction of iOS players, but for me it's the point where the games I like to play can finally see the light of day on the mobile platform I carry in my pocket, turning it into the gaming machine I've wanted it to be for so long. On a similar topic, don't get me started about the possibilities of the Apple TV ;)
If you've read my blog before you also know I like building my own gamepads! My last attempt resulted in a perfectly working iCade-compatible Wii Classic Controller, but it's not something you carry around.
So far the mobile game controllers I've seen have lacked the look and feel of the core handheld gaming devices made by Nintendo and Sony (for instance) that I'm used to. So, inspired by the new MFi game controller capabilities and the lack of such designs, I've started a new gamepad project with the form factor of a Nintendo DS. I'm just hoping someone makes an MFi gamepad like the image below.
Naturally I'm not part of the MFi program, so I will have to stick with either an iCade- or Bluetooth HID-compatible gamepad. The first version won't make use of a second screen, just to keep things simple.
The plan (roughly) is something like this:
Get hold of a second NDS. I don't want to sacrifice my own!
Open and remove everything to get an empty case.
Make prototype PCBs for the new electronics and controller buttons.
Implement the prototype.
Create the final PCB and mount it in the case.
Component-wise I will be using:
SparkFun BlueSMiRF HID for Bluetooth connectivity
Arduino Nano as microcontroller (prototyping)
Digispark for "production"
IC 4051 as a mux for digital input (to save input pins)
Custom-made PCB to hold the components, d-pad and buttons
This post is dedicated to Ben Heck as he inspired me to actually start the project :)
If you follow me on Twitter or have read my blog before, you know I miss proper gamepad support for iOS devices for various reasons. I want to point out that I don't think touch is a bad idea. It all depends on the context. A gamepad is great when using a device in the context of a console.
Anyway, as a natural result of my interest I keep myself updated on available gamepad solutions. One of my favourites is the iCade system, and when I saw Ben Heck's episode about rebuilding an iCade into an iPhone strap-on I decided to build my own iCade compatible gamepad by looting parts from other devices. It was also the perfect opportunity to do some microcontroller coding! Making my gamepad iCade compatible also meant I had lots of games to test with and could concentrate on the pad alone.
iOS and external input
One way that iOS supports external input is in the form of Bluetooth keyboards. The iCade appears as a Bluetooth keyboard to iOS devices and sends characters representing the stick movement and button presses.
Normally (when using keyboards for game input) there's a "getKeyboardState" function giving you the state of all keys on the keyboard: which are pressed and which aren't. iOS uses an event based system, calling an "insertText" function when keys are pressed but giving no hint of when keys are released. To work around this, the iCade sends one character when a button is pressed and another character when the button is released. A super simple system that is easy to implement and parse, and that also works nicely with various emulators. More information about the iCade way can be found in the iCade Developers Resource.
As I wanted my gamepad to appear as a Bluetooth keyboard, I bought the cheapest Bluetooth keyboard I could find at my local dealer for about 20€ and started looting it for its Bluetooth module.
Once I found the Bluetooth module, I soldered some wires to the keyboard matrix connectors so that I could mount it on my breadboard and start communicating with it.
Microcontroller
The microcontroller makes all the logic decisions, such as reading input and sending characters via the Bluetooth module. My choice of microcontroller is an Arduino Nano (ATmega328: 16 MHz, 32K flash, 2K SRAM, 1K EEPROM) with 8 analog inputs and 14 digital I/O pins (6 of which can do PWM). Programs (sketches) are coded in a small subset of C/C++ and compiled with avr-gcc. Last but not least, it comes with an IDE ready to rock :)
Input
For the first version of the project I used a thumbstick similar to the ones found in console gamepads, together with tactile buttons I purchased from my local electronics store and mounted on my breadboard. It was great for initial development but sucked from a gameplay perspective, as a breadboard isn't exactly ergonomic ;) My initial thought was to use one of my Xbox controllers and solder wires to its connectors, but luckily I discovered that the Wii Classic Controller uses I2C, which made it the perfect candidate for the Arduino as it comes with I2C support through its Wire library! If you're not familiar with I2C and don't want to read the Wikipedia page: I2C is a serial interface allowing slave devices (such as the Wii Nunchuk or Classic Controller) to communicate with a master device (such as the Wiimote) using only two wires. Very friendly to a microcontroller's input ports, as it allows all buttons and sticks to be interfaced using only two pins.
Recreating the matrix
Since keyboards contain a lot of keys, it would be cumbersome if every key needed its own wire to trigger input, as that would lead to a lot of wires. Instead keyboards use a keyboard matrix, which essentially is a grid of wires organised into columns and rows. When a key is pressed, a column and a row are short circuited, generating a keystroke.
Let's assume we have a 61-key keyboard. With a grid of 8 rows by 8 columns we can address all keys.
In the case of the iCade, there are 12 buttons that require 24 unique characters, but we only have 6 outputs to spare on the Arduino. We also need to short circuit the rows and columns on the Bluetooth module to generate a keystroke (just as the original keyboard did). What components could we use to accomplish this?
IC 4051 to the rescue!
The 4051 is a circuit that can be used as a demultiplexer (demux), which can be seen as a dynamic switch. It can address 8 lines using 3 select pins, which is kind of perfect given the 6 outputs available on the Arduino. By using two circuits (one for the rows and one for the columns) we are able to recreate a big portion of the keyboard matrix, allowing us to send a lot of characters. The actual short circuiting is accomplished by a nice feature of the 4051: the common I/O port. By connecting the two 4051s together through their common I/O ports, opening one line on each circuit results in a closed circuit between the selected row and column.
Connecting it all together
After putting all the hardware on a breadboard I ended up with this:
Not exactly the form factor worthy an iOS device but it works ;)
The logical view is something like this:
The actual code is dead simple: read input from the Wii controller and send characters by opening the lines on the 4051s. To interface with the Wii Classic Controller I used the WiiClassicController library from the Arduino Playground as a starting point. Below is some example code used to send a character.
// Sends a single character by selecting the corresponding column and row
// on the two 4051s, shorting them together through their common I/O ports.
void sendKeyboardKey(const char ch)
{
  int col = getPinForColMatrix(ch);
  int row = getPinForRowMatrix(ch);

  // Pins 4-6 are the select lines of the column 4051, pins 7-9 the select
  // lines of the row 4051. Write every bit explicitly (HIGH or LOW) so no
  // stale state is left over from the previously sent character.
  digitalWrite(4, (col & 0x01) ? HIGH : LOW);
  digitalWrite(5, (col & 0x02) ? HIGH : LOW);
  digitalWrite(6, (col & 0x04) ? HIGH : LOW);
  digitalWrite(7, (row & 0x01) ? HIGH : LOW);
  digitalWrite(8, (row & 0x02) ? HIGH : LOW);
  digitalWrite(9, (row & 0x04) ? HIGH : LOW);
}
Below is a small video showing the gamepad in action. The game being used in the demo is Neoteria by Orange Pixel. It's a great game! It's just me not being any good at it. Yet :)
Conclusion
It's been a really fun project to do, but the current implementation suffers from two major drawbacks:
The iCade setup only supports digital inputs, limiting its use.
My setup suffers from input latency, as the matrix needs to be short circuited for ~4 ms to issue a keystroke. It becomes apparent when multiple buttons are pressed at once.
Both issues will be addressed in a separate blog post. I'm waiting for some circuits to arrive :)
Intro
I started my game dev career looking after the networking code in Battlefield 2142. Throughout the years (and engines) the networking model of authoritative servers has been the natural choice, as I've been working on multiplayer shooters. As with any model it has its strengths and weaknesses. One of the weaknesses is that it's rather complex to get right. This spurred me to start experimenting with simpler models. Or really the simplest model possible (?).
My goal was to write as little (net)code as possible and see where I ended up. Would the differences be noticeable? What strengths and weaknesses would the model I ended up with have?
So what's up with the title? Well it turns out I'm violating the most sacred rule of multiplayer networking...
Worth noting is that I'm not claiming to have invented something new and revolutionary. Consider this post as a travel journal :)
Rule no 1: Never trust the client
I started my project with the ambition to use a networking model suitable for co-op. The sole reason was that I could completely ignore the concept of cheaters. Friends playing together don't cheat on each other. Right? And by ignoring cheaters I could break rule number 1 in multiplayer networking: never trust a client. Any data originating from a game client can be tampered with. In an authoritative setup the "only" thing that can be tampered with is player input, and even though that's enough to create a whole range of cheats, the impact is limited. For some reason I ended up testing my model not in co-op but in multiplayer...
In a normal setup the game client sends its input to the game server. The game server runs a full simulation of the game: moving objects, applying players' inputs to the objects they control, and sending the results to all clients of the game, which update the respective object states. We call this ghosting or replication of objects (aka ghosts). The controlling client doesn't wait for the server to respond, as that would introduce latency: you would have to wait a full round trip before your input actions got a response. Instead the client predicts what's going to happen, and when the response from the server comes it corrects its state based on what the server responded with. Prediction and correction are two central concepts in authoritative multiplayer networking, making sure objects are positioned at the same place at the same (network) time on all clients and the server.
In my setup I did a bit of the reverse. I use authoritative clients (common in racing games). Instead of sending my input to the server to do the simulation, I run my character's simulation on the client and send its transform to the server, which relays it to the other clients. This totally removes the need for client prediction and correction, as the controlling player gets a perfect simulation and the server/other clients get the result of that simulation. Code wise this offers a very slim implementation, as the code needed is literally only a few lines for sending and reading the character's position and direction.
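To give a feel for how slim it is, here is a hedged sketch of what the client side amounts to. The transport interface and message type are placeholders, not an actual API.

using UnityEngine;

// Sketch of an authoritative client: the locally controlled character simply
// broadcasts the result of its own simulation every network tick.
public class AuthoritativeCharacter : MonoBehaviour
{
    public float sendRate = 20f;              // network ticks per second
    public INetConnection connection;         // placeholder transport interface
    private float sendTimer;

    void Update()
    {
        sendTimer += Time.deltaTime;
        if (sendTimer >= 1f / sendRate)
        {
            sendTimer = 0f;
            // No prediction, no correction: what this client simulated is the truth.
            connection.Send(new TransformMessage
            {
                position = transform.position,
                rotation = transform.rotation
            });
        }
    }
}

// Placeholder types so the sketch is self-contained.
public struct TransformMessage { public Vector3 position; public Quaternion rotation; }
public interface INetConnection { void Send(TransformMessage msg); }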
As with any networking model the send/receive frequency is much lower than the framerate, so to provide smooth movement, interpolation/extrapolation is applied to replicated objects.
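A minimal sketch of the interpolation on the receiving side could look like this, rendering remote objects a fixed interval in the past and blending between the two surrounding snapshots (the 100 ms render delay and buffer size are just illustrative values; extrapolation, which would kick in when the buffer runs dry, is omitted).

using System.Collections.Generic;
using UnityEngine;

// Illustrative: render a replicated character slightly behind the latest
// data and interpolate between the two snapshots surrounding that time.
public class ReplicatedCharacter : MonoBehaviour
{
    private struct Snapshot { public float time; public Vector3 position; }

    public float interpolationDelay = 0.1f;    // render 100 ms behind latest data
    private readonly List<Snapshot> buffer = new List<Snapshot>();

    public void OnTransformReceived(Vector3 position)
    {
        buffer.Add(new Snapshot { time = Time.time, position = position });
        if (buffer.Count > 32) buffer.RemoveAt(0);   // keep the buffer bounded
    }

    void Update()
    {
        if (buffer.Count < 2) return;
        float renderTime = Time.time - interpolationDelay;

        // Find the newest pair of snapshots that straddle the render time.
        for (int i = buffer.Count - 1; i > 0; i--)
        {
            if (buffer[i - 1].time <= renderTime)
            {
                Snapshot from = buffer[i - 1];
                Snapshot to = buffer[i];
                float t = Mathf.InverseLerp(from.time, to.time, renderTime);
                transform.position = Vector3.Lerp(from.position, to.position, t);
                return;
            }
        }
    }
}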
Also worth noting is that I only used authoritative clients on objects directly controlled by the player such as the player itself or networked projectiles. All other networked objects (such as NPCs, movable crates) are simulated on the server and ghosted to the clients.
Dealing damage
Moving characters are nice and all, but as I was building an FPS I needed projectiles, hit detection, and damage models. In an authoritative server model everything is controlled by the server (hence the name...). What happens when a player fires a gun is similar to the moving character scenario above, i.e. the client predicts what's going to happen (fire gun) but the server controls the outcome (fire weapon, deal damage). This means that (especially in high latency scenarios) the client and server can get out of sync for a short period of time before the client is corrected by the server. This is often noticeable when you fire at your opponent and get an impact effect, but no damage is dealt. To cater for latency, the concept of latency compensation is used, where the server takes the player's latency into account when doing hit detection (for instance). If latency were constant this wouldn't be a problem, but as latency is very volatile (millisecond differences) you can't be 100% accurate.
In my setup I let the controlling client decide over its own projectiles. When a player fires a weapon, the projectile is simulated on the client. The client also performs hit detection. So far this is similar to an authoritative server setup (except no prediction/correction is needed). The difference is that the client also requests that damage be dealt. It does so by sending a "deal damage" request to the server, which applies damage and replicates the updated health state of the object to all clients. The effect is that when a player fires a weapon and hits a target, damage is dealt.
From a network code perspective this resulted in super simple code again, as the only things networked are a replay buffer of commands (e.g. fire, fire, reload, zoom in, fire, zoom out, reload, etc.) together with simple messages requesting damage to be dealt. The other clients just replay the buffer. No weapon simulation needed.
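A rough sketch of those two pieces could look like this; the command names, types and the idea of flushing once per network tick are illustrative, not the exact implementation.

using System.Collections.Generic;

// Sketch of the trusted-client weapon flow: the owning client records its
// weapon commands, streams them to the server, and asks for damage directly.
public enum WeaponCommand : byte { Fire, Reload, ZoomIn, ZoomOut }

public class WeaponReplayBuffer
{
    private readonly List<WeaponCommand> pending = new List<WeaponCommand>();

    public void Record(WeaponCommand command)
    {
        pending.Add(command);
    }

    // Called once per network tick; the returned batch is sent to the server,
    // which simply relays it so the other clients can replay the same commands.
    public WeaponCommand[] Flush()
    {
        var batch = pending.ToArray();
        pending.Clear();
        return batch;
    }
}

// The hit itself is detected on the owning client, which then just requests
// that damage be applied. The server applies it and replicates the new health.
public struct DealDamageRequest
{
    public int targetObjectId;
    public float damage;
}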
The result
To be able to measure some results I implemented a simple telemetry system that tracks players' positions when they get killed, together with the killers' positions. The data was saved to a file that the participants sent me. I ran a number of playtests to get some input data. And to be fair, I've only tested the setup in a low latency environment, as I wanted to see if my network model would break immediately or not.
The tests were done in 1-on-1 fights so that it would be apparent where the opponent was during a fire fight. After a playtest the big questions were: "Did you ever feel you were killed when you shouldn't have been?" and "Did you feel you hit the other player when you thought you should have?"
The results showed that it felt good and snappy. Feelings are good, but did the telemetry data back up the results? To get some hard facts I implemented a simple viewer for my telemetry data where I can load and group each client's data to compare the results. What it actually does is create spheres indicating the positions of victims and their killers. I then use a free camera to fly around and view the data. In future versions I will add the possibility of doing this on the actual level to get an even better view of the data. I also wrote some simple tools to calculate the distance between the positions to find out how big the difference was between the clients. It turned out that the biggest distance between where client A thought he was killed and where client B thought the same kill happened was 40 cm (for characters running at 6 m/s). Considering the limited amount of code written, that is pretty awesome.
Naturally the results would look different in high latency scenarios. But then again any multiplayer game will behave differently in high latency scenarios.
Conclusion
So what are the conclusions? Except that it would be so much nicer if people didn't cheat?
Did I learn something? The one thing I was most amazed by was the small amount of code needed to provide a full scale multiplayer experience. Sure, it has the enormous drawback of being hacker friendly, but used in the right context it certainly has its place. Due to its simplicity it's robust, easy to maintain, and easy to extend when needed.
I was also pleased to see the simplicity of the code dealing with replicated clients (due to the absence of prediction/correction and the use of the replay buffer), as virtually no simulation was needed. An extra bonus was that this led to a very lightweight implementation both server side and client side.
Am I just old and grumpy because my number one game feature request for iOS is support for the full HID Bluetooth profile, i.e. mouse and gamepad?
I understand and appreciate the idea of new innovative input schemes for games, but since I mostly play shooters it really annoys me when I don't get the level of precision I'm used to. I use my Flings whenever possible, but not many games allow free positioning of every input element, and not all games support two virtual joysticks.
Having mouse/gamepad support would also enable me to play as I'm used to; in my comfy sofa with the iPad connected to my big screen...
I can honestly see my iPad replacing my consoles IF it would let me play the way I want. Fidelity wise I'm happy, since I always prefer gameplay and feel over high poly counts and post FX.
The same reasons apply to the Wii. I almost "never" use my Wii even though there are so many great IPs. When I do, it's with my Classic Controller ;)
Oh well. I guess I'll stick with my "old" consoles for now ;) I just think it's a shame that developers are blocked out from using a mouse or gamepad.
Battlefield games have a strong multiplayer legacy, and Battlefield Play4Free is no exception. On the contrary, the game design chosen by our designers creates new challenges to be solved.
This post describes some of the changes done to the gameplay layer of the networking engine in order to improve performance.
Let's start with some background. The game basically works with 6 networkable types: vehicles, soldiers, projectiles, cameras, vehicle spawners and soldier spawners. This is only part of the truth, as vehicles can consist of multiple network objects depending on the vehicle type and its moving parts (turrets, flaps, springs, etc.). Each part is represented by its own networked object. It's also worth pointing out that only a fraction of the projectiles are networked. As long as the client and server can simulate a projectile independently of each other there is no need to network anything. This is true for "normal" projectiles that travel in straight lines and detonate on impact. Examples of networked projectiles are grenades, TV-guided missiles and heat-seeking missiles. We also have a third category, stickies, which are used for all kinds of things, but for this post the most important are medic boxes, ammo crates and claymores, i.e. deployables.
Traditionally in Battlefield games, the selection of weapons available to the player was controlled by kits. Each kit contained a fixed set of weapons, which made it easy to calculate memory and network usage. But as one of the driving forces in the business model used in Battlefield Play4Free is to sell weapons, the kit contents are no longer a finite collection: the store is constantly updated and any weapon in the catalog can be used. So instead of a handful of weapons we could very well be looking at hundreds of weapons and gadgets.
From the game's perspective this means it needs to know about all weapons and any network objects they use. This gives us challenge number 1: how to deal with an ever growing collection of weapons? That is, however, outside the scope of this post, which deals with another challenge.
If challenge 1 is about dealing with possible weapons, challenge 2 is about the amount of active network objects. Design wise, Battlefield Play4Free has increased the number of networked objects compared to BF2/142 by adding more vehicles to the maps (at the time of writing we have approx 50% more vehicles on Oman than the original). The game also allows more medic boxes, ammo crates and other gadgets to exist simultaneously in the world compared to BF2/142, adding more objects that need to be networked.
These design changes don't manifest themselves as an increase in bandwidth, as we do a good job of selecting, compressing and predicting network data. The hit is taken in CPU performance, especially on the server side. This post isn't about the networking engine as such, but it's important to know that the server holds a database containing all active network objects, aka ghosts, for each player. So if we have 180 active network objects, the server needs to serialize approx 5700 ghosts for 32 players, i.e. every single network object might exist as 32 ghosts, which makes us want to keep the number of network objects down to a minimum. It's the serialization process that is time consuming. Unlike a game client, where we can pretty much utilize the full CPU capacity, the server needs to be as lightweight as possible to allow as many simultaneous game servers as possible to run on a host. The more game servers we can run, the less we need to spend on hosting. For a free game this is important.
Challenge 2 has normally been handled by limiting the number of vehicles allowed on a map and by limiting the number of networked projectiles available on a map. And even though the initial thought in certain people's heads was to do the same this time around, I really wanted to improve the tech to cope with the design instead. It's not the DICE way to settle for second best ;)
The solution was found by studying the characteristics of the ghosts, and as it turned out, the majority of the objects the ghosts represent were simply non-moving. The reason is simply that many players grab a vehicle, drive to a capture point, jump out and start fighting, leaving the vehicle standing. Vehicle spawners are also often located at spawn points, leaving many vehicles standing around waiting for someone to turn up and start using them. Besides non-moving vehicles, players throw a lot of medic boxes, ammo crates, claymores, etc. on the ground as part of the gameplay, and they are even less mobile: once they have landed on the ground they won't ever move again.
So why are non-moving objects a problem? Before ghosts are serialized over the network, we put them through a scoping process that goes through all ghost objects and sorts them by priority. The priority is based on the type of object, how far away it is from the active player, how fast it is moving, etc. This is done to ensure that important objects near you are updated often enough to appear and behave as you expect. The scoping process also ensures that all objects are serialized over the network at some point. So even if a vehicle is standing still it's still serialized (even though we won't send much data, as it isn't moving).
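To illustrate the idea (this is not the actual engine code, and the weights are invented), a simplified per-ghost priority function could look something like the sketch below.

using System.Numerics;

// Illustrative only: a per-ghost priority score combining object type,
// distance to the viewing player and current speed.
public static class GhostScoping
{
    public static float ComputePriority(Vector3 ghostPosition, float ghostSpeed,
                                        bool isPlayerControlled, Vector3 viewerPosition)
    {
        float typeWeight = isPlayerControlled ? 2.0f : 1.0f;
        float distance = Vector3.Distance(ghostPosition, viewerPosition);
        float distanceFactor = 1.0f / (1.0f + distance);
        float speedFactor = 1.0f + ghostSpeed;

        // Higher score means the ghost is serialized sooner and more often,
        // but every ghost eventually gets a turn so nothing starves.
        return typeWeight * distanceFactor * speedFactor;
    }
}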
Enter culling. What if we could rule out the non-moving objects from the equation? That would save a lot of CPU time.
The first approach used to test the theory was to implement a very brute force backface culling mechanism, i.e. non-moving network objects behind the player weren't networked at all. The good thing about this approach was that it was server side only, so a new server could be deployed to the production environment and profiled using real players. The bad thing was that objects would pop in and out as the player rotated, due to objects being removed on the client when the server stopped networking them. Shadows from objects behind the player would also be removed. However, the results clearly showed that fewer ghosts gave us the gain in performance we wanted (25-75% less CPU usage).
The second approach, aimed at solving the object popping, was to reduce the frequency at which the non-moving objects were serialized to an absolute minimum, i.e. they wouldn't be part of the expensive serialization process except to keep the objects alive and visible on the client. This fixed the popping side effects of the hard culling, but didn't give us the same CPU savings as the first approach: only approx 25% less CPU usage.
The third and final approach combines the two previous approaches, picking the best of both worlds (with some tradeoffs in performance, of course). The solution works by putting idle objects into three zones (a rough sketch of the classification follows after the list):
Close-by objects, i.e. objects close to the player (typically within 50 meters). These are given super low priority; just enough for them to be networked every once in a while to keep them visible and active.
Objects behind the player, i.e. objects that are well behind the player (beyond 50 meters). These are culled away altogether and account for the majority of the performance gain, as most idle objects fall into this zone.
Soon to be visible objects, i.e. objects that are behind the player but close to the culling line. These are given low priority; high enough for them to be visible but low enough to not be updated at full rate.
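As a rough sketch (again, not the actual engine code), the zone classification for an idle ghost could look something like this. The 50 meter boundary comes from the description above; the margin and the dot-product test for "behind" are illustrative choices.

using System.Numerics;

// Sketch of the three-zone classification for idle (non-moving) ghosts.
public enum IdleZone { CloseBy, SoonVisible, Culled }

public static class IdleGhostCulling
{
    public static IdleZone Classify(Vector3 ghostPosition, Vector3 playerPosition,
                                    Vector3 playerForward, float cullDistance = 50f,
                                    float margin = 10f)
    {
        Vector3 toGhost = ghostPosition - playerPosition;
        float distance = toGhost.Length();

        // Zone 1: close to the player, kept alive at super low priority.
        if (distance <= cullDistance)
            return IdleZone.CloseBy;

        bool behind = Vector3.Dot(Vector3.Normalize(toGhost), playerForward) < 0f;

        // Zone 3: visible, or behind but close to the culling line; low priority.
        if (!behind || distance <= cullDistance + margin)
            return IdleZone.SoonVisible;

        // Zone 2: well behind the player; not serialized at all.
        return IdleZone.Culled;
    }
}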
The end result is a CPU gain similar to the first approach. The big difference is that the CPU has longer "spikes" (maybe not spikes, more like hills) as more objects get serialized when players get into hot fighting zones on the map, compared to the first approach. There are still some visual artifacts present on high latency connections, i.e. objects far away popping when twitching the mouse, but all in all the culling works really well. I also believe we can minimize the popping by fine-tuning the zone boundaries.
And that's where we are! Testing, tuning, refining until we feel it's ready to be deployed. But that's a completely different story.