Game performance: am I doing something wrong?



I bought the game a few weeks back and my god am I hooked. To Klei: thank you for designing such an amazing game. I love it!

Now on to my problem:

The range of bugs in this game, though at times frustrating, I can live with; it's a complex sandbox and has only just been released...
However, I'm facing a very poor frame rate of around 20 FPS on my six-month-old, high-end gaming PC, and it's frustrating.
I'm in cycle 400-500 now and expanding into space (I did a fair bit of automation to ensure I was long-term sustainable before going up).

Should I accept this performance as the norm? This might be comparing apples to oranges, but I regularly play Factorio, which somehow reminds me of ONI (2D, complex simulation), yet Factorio is light-years ahead when it comes to frame rate, freezes (on save), and engine performance on end-game maps.
Did they simply pick the wrong engine/platform to build a game this complex?
Am I doing something on my map that is strongly affecting my frame rate?
Are there large plans in the pipeline which are expected to drastically improve performance?

I understand they are working on DLC right now, which I would look forward to, but adding more code/complexity to the game in its current state (performance-wise) makes me more worried than excited.


That is just how it runs. There also seems to be an additional issue where many Intel rigs add some kind of stutter or jitter that makes the problem worse.

I have a new Ryzen 3600X and only get 15-20 FPS on a (very late-game) complex base, but it is nicely playable except for relatively rare lockups of 10-20 seconds (probably some kind of garbage collection). Before my current computer, I used an AMD FX8350, and again, the game was nicely playable at 15-20 FPS. Others on Intel have reported the game basically unplayable at 20 FPS.

This Intel vs. AMD effect was also noted by some game streamers when the first Zen CPUs came out: on AMD you could apparently stream smoothly from the same PC, while on Intel you apparently needed a second PC to do the video capturing to get smooth streaming.

 


- Try limiting the number of possible tasks for dupes. Mainly, sweep everything from the floors, then lock the doors on the storage so no dupe can access it.

- Try helping the pathfinding by disabling/not using jet suits, and by making every place accessible by only one route.

- Try limiting farmed critters from being able to move around too much.

 

I sincerely hope they optimize for performance so we won't have to spend time optimizing our bases around this concept.


7 hours ago, Croz said:

Did they simply pick the wrong engine/platform to build a game this complex?

I believe Oxygen Not Included uses the Unity engine, which uses C# as its scripting language.

C# typically doesn't reach the same performance as C or C++. On the other hand, this choice made it easier to focus on adding many interesting features to the game.

Factorio is written in C or C++, I believe. No doubt they put an impressive effort into optimizing it, but they're also not simulating gases and temperature.

 


@MorsDux Thinning out my large spare critter population helped (I had around 100 outside the farms, kept for a rainy day).

I'll spend some cycles sweeping now (so many storage bins...)

@DarkMoge Thank you! I'll have a look!

@kerosene I'm also in the world of software development and had similar assumptions.

The Factorio team does seem to have gone the extra mile of using a "lower-level language", giving them more control over the hardware, etc. I hope Klei still has enough room to diagnose and optimize the game despite using Unity.

edit: typos


6 hours ago, Gurgel said:

That is just how it runs. There also seems to be an additional issue where many Intel rigs add some kind of stutter or jitter that makes the problem worse.

I have a new Ryzen 3600X and only get 15-20 FPS on a (very late-game) complex base, but it is nicely playable except for relatively rare lockups of 10-20 seconds (probably some kind of garbage collection). Before my current computer, I used an AMD FX8350, and again, the game was nicely playable at 15-20 FPS. Others on Intel have reported the game basically unplayable at 20 FPS.

This Intel vs. AMD effect was also noted by some game streamers when the first Zen CPUs came out: on AMD you could apparently stream smoothly from the same PC, while on Intel you apparently needed a second PC to do the video capturing to get smooth streaming.

 

As far as I'm aware, that's a combination of AMD's Infinity Fabric boosting available bandwidth while lowering core-to-core and core-to-memory latency, along with AMD's generally better multi-core performance. Up until this generation, though, single-core performance left a lot to be desired, as I'm sure you're well aware.
 
Not sure if this is useful, but for comparison: I'm running an older AMD FX-9590, 8 GB of system memory, a solid-state drive, and an R9 Nano... At cycle 900 I'm pushing 17-20 FPS with the simulation running. The only major hitch is the morning snapshot save/task recalculation, where I go from FPS to SPF for 2-3 seconds. Load times getting back into that game are a bit... long. Figure 30-ish seconds. Hoping to upgrade to either the 3950X or a 3rd-gen Threadripper around the February timeframe.

It does make me wonder if they could boost game load times by selectively pulling gamesave data instead of grabbing the whole thing at once, much like texture streaming: grab only the stuff you need up front, delay what you might need soon, and don't load the stuff you definitely won't need at all. Say, buffer the report data and the snapshot data so they're available 10-15 seconds after launch instead of up front. I'm fairly certain those two take up a non-trivial amount of the initial data pull and sort calculations on game init.
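For what that could look like in practice, here is a minimal C++ sketch of the deferred-loading idea. Everything in it is hypothetical: the save name, the split into a critical "world" section and a deferrable "reports" section, and the timings are all assumptions for illustration, not how ONI's loader actually works.

```cpp
// Sketch: load the critical part of a save on the main thread while the
// deferrable report history loads in the background.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

struct WorldData  { int cycle = 0; };        // stand-in for tiles/dupes/etc.
struct ReportData { int cyclesLogged = 0; }; // stand-in for report history

WorldData loadWorld(const std::string& path) {
    // Pretend to parse the big, critical section of the save.
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return WorldData{900};
}

ReportData loadReports(const std::string& path) {
    // Pretend to parse report history; not needed to start playing.
    std::this_thread::sleep_for(std::chrono::seconds(10));
    return ReportData{900};
}

int main() {
    const std::string save = "colony.sav"; // hypothetical save file

    // Kick off the deferrable section in the background...
    std::future<ReportData> reports =
        std::async(std::launch::async, loadReports, save);

    // ...while the critical path loads and the game becomes playable.
    WorldData world = loadWorld(save);
    std::cout << "playable at cycle " << world.cycle << "\n";

    // Block on the report data only when it is actually needed,
    // e.g. the first time the report screen is opened.
    ReportData history = reports.get();
    std::cout << "reports ready: " << history.cyclesLogged << " cycles\n";
}
```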

47 minutes ago, kerosene said:

I believe Oxygen Not Included uses the Unity engine, which uses C# as its scripting language.

C# typically doesn't reach the same performance as C or C++. On the other hand, this choice made it easier to focus on adding many interesting features to the game.

Factorio is written in C or C++, I believe. No doubt they put an impressive effort into optimizing it, but they're also not simulating gases and temperature.

 

I wouldn't be surprised if some of Factorio's code segments are put together as shellcode/assembly by hand, specifically the ones that get called *all* the time. In my experience, using C/C++ for the program's main structure and assembly for the most heavily called functions nets the absolute best performance.
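As a rough illustration of that split, here is a tiny C++ sketch where SSE intrinsics stand in for a hand-written assembly hot path; the function name and the idea that it sums per-cell heat values are invented for the example.

```cpp
// Sketch: C++ for structure, hand-tuned SIMD for the hot path.
#include <immintrin.h>
#include <cstddef>

// Hot path: imagine this is called for every cell, every tick.
// Processes four floats per iteration instead of one.
float sumHeat(const float* cells, std::size_t n) {
    __m128 acc = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(cells + i));

    // Horizontal reduction of the four SIMD lanes.
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];

    for (; i < n; ++i) total += cells[i]; // scalar tail
    return total;
}
```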

 


28 minutes ago, kerosene said:

I believe Oxygen Not Included uses the Unity engine, which uses C# as its scripting language.

C# typically doesn't reach the same performance as C or C++. On the other hand, this choice made it easier to focus on adding many interesting features to the game.

Factorio is written in C or C++, I believe. No doubt they put an impressive effort into optimizing it, but they're also not simulating gases and temperature.

 

ONI is definitely developed in Unity, but the simulation calculations occur inside a specially compiled library written in C++. That library was compiled to optimized native machine code, exactly the same as any other C++ app would be.

There would be no performance gain from switching the rendering engine over to a custom one written in C++, because the core components of the Unity engine are already written in C++. C# is just the scripting language used in the editor, NOT the language Unity itself was built with.

Graphically, shaders do the majority of the rendering work, and they're written in HLSL, which executes on the video card and will always perform the same regardless of which engine loaded it into VRAM. The default shaders of some engines have more accurate lighting, or are maybe more performant, than others out of the box. That has absolutely nothing to do with C#, though, and you can just as easily use a custom shader in Unity as you could in any other engine.

Games don't perform badly just because they use Unity. If a game performs badly, it's either because the game was too ambitious for the current level of technology, or because the developers lacked the skill required to realize their vision in a performant way. That applies to all modern game engines, including the custom indie ones built from scratch.
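For anyone curious what that C++-library-inside-Unity split typically looks like, here is a minimal sketch of a native sim library exposing a C ABI, the usual shape for a plugin that a C# host calls via P/Invoke. The export name, grid layout, and naive diffusion step are all assumptions for illustration, not Klei's actual code.

```cpp
// Sketch: native simulation library with a C ABI, loadable from C#
// via [DllImport]. Names and the diffusion step are hypothetical.
#include <cstdint>
#include <vector>

#if defined(_WIN32)
#  define SIM_EXPORT extern "C" __declspec(dllexport)
#else
#  define SIM_EXPORT extern "C" __attribute__((visibility("default")))
#endif

// Advance the temperature grid one tick with a naive diffusion step.
// `temp` is a row-major width*height array, updated in place.
SIM_EXPORT void SimTick(float* temp, int32_t width, int32_t height,
                        float rate) {
    std::vector<float> next(temp, temp + width * height);
    for (int32_t y = 1; y < height - 1; ++y) {
        for (int32_t x = 1; x < width - 1; ++x) {
            const int32_t i = y * width + x;
            const float neighbours = temp[i - 1] + temp[i + 1] +
                                     temp[i - width] + temp[i + width];
            next[i] = temp[i] + rate * (neighbours - 4.0f * temp[i]);
        }
    }
    for (int32_t i = 0; i < width * height; ++i) temp[i] = next[i];
}
```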


17 minutes ago, ZanthraSW said:

Does your FPS change depending on whether you are playing at 1x, 2x, or 3x speed?

Mine certainly does. Given that 1-2x feels like playing in molasses, I tend to suck up the lower frame rate, though.

 

15 minutes ago, Erasmus Crowley said:

ONI is definitely developed in Unity, but the simulation calculations occur inside a specially compiled library written in C++. That library was compiled to optimized native machine code, exactly the same as any other C++ app would be.

There would be no performance gain from switching the rendering engine over to a custom one written in C++, because the core components of the Unity engine are already written in C++. C# is just the scripting language used in the editor, NOT the language Unity itself was built with.

Graphically, shaders do the majority of the rendering work, and they're written in HLSL, which executes on the video card and will always perform the same regardless of which engine loaded it into VRAM. The default shaders of some engines have more accurate lighting, or are maybe more performant, than others out of the box. That has absolutely nothing to do with C#, though, and you can just as easily use a custom shader in Unity as you could in any other engine.

Games don't perform badly just because they use Unity. If a game performs badly, it's either because the game was too ambitious for the current level of technology, or because the developers lacked the skill required to realize their vision in a performant way. That applies to all modern game engines, including the custom indie ones built from scratch.

Worth noting that unless you have a jillion and one things on screen, ONI is not a graphically taxing game... and when you do have a jillion and one things on screen, I'd wager the lag is on the CPU side for most people.

Honestly, I think my own aversion to Unity comes from the number of indie/early-access games I've played where the devs either didn't understand optimization or, like you said, were too ambitious. After enough of those, it's hard not to associate the dismal performance with the engine rather than the devs.

My absolute favorite, though, was being told that a certain texture graphics bug was impossible to fix because "Microsoft's code can't handle blending more than 2 textures from adjacent voxels onto the voxel between them." I mean... if he and his team want to be lazy, at least they should be honest about it. A "Yeah, that bugs us too, but we haven't found a way to actually fix it yet because the code screwing up isn't ours" would have gone a long way. Nothing's impossible to fix; there's just "We can't directly fix that ourselves without paying a crapton in licensing fees or hiring a programmer who's more competent than the Elbonians we currently have... and we don't have the budget for that." I guess it's the messaging/tone that got me.


47 minutes ago, Croz said:

I'll spend some cycles sweeping now (so many storage bins...)

Assign only ONE material to each bin; don't mix them. (It would be great to have a way to destroy materials; Hatches take so much time.)

Don't go above 18 dupes, even fewer if possible.

Kill all the wild animals (including Pufts!).

Put locked doors on any area you are not using.

The fewer paths the better, so try to use lots of Transit Tubes (forgot the real name).

And finally, check this post:

 


2 hours ago, Erasmus Crowley said:

ONI is definitely developed in Unity, but the simulation calculations occur inside a specially compiled library written in C++. That library was compiled to optimized native machine code, exactly the same as any other C++ app would be.

There would be no performance gain from switching the rendering engine over to a custom one written in C++, because the core components of the Unity engine are already written in C++. C# is just the scripting language used in the editor, NOT the language Unity itself was built with.

The fact of the matter is simply that the simulation engine and the planning engine used by ONI take a lot of effort simply because of what they do. There may not actually be much room for optimization left. Maybe they can move some things out to additional cores by putting them into their own threads, but that is hard to do.
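As a rough picture of what "their own threads" could mean, here is a minimal C++ sketch that splits an embarrassingly parallel grid update across hardware threads. All the names are invented, and real simulation code is much harder to cut apart, because cells depend on their neighbours and on shared game state.

```cpp
// Sketch: spread independent per-row work across available cores.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

void updateRows(std::vector<float>& grid, std::size_t width,
                std::size_t rowBegin, std::size_t rowEnd) {
    for (std::size_t r = rowBegin; r < rowEnd; ++r)
        for (std::size_t c = 0; c < width; ++c)
            grid[r * width + c] *= 0.999f; // stand-in for real cell work
}

void parallelUpdate(std::vector<float>& grid, std::size_t width) {
    const std::size_t rows = grid.size() / width;
    const std::size_t workers =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t chunk = (rows + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(rows, begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(updateRows, std::ref(grid), width, begin, end);
    }
    for (auto& t : pool) t.join(); // wait for every slice to finish
}
```

On a machine with few cores, the thread overhead can outweigh the gain, which is presumably why such a path would be disabled on low-core-count CPUs.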

2 hours ago, storm6436 said:

As far as I'm aware, that's a combination of AMD's Infinity Fabric boosting available bandwidth while lowering core-to-core and core-to-memory latency, along with AMD's generally better multi-core performance. Up until this generation, though, single-core performance left a lot to be desired, as I'm sure you're well aware.

Actually, single-core performance was never really that bad; the gaming world is just always in hysterics about performance. The fact of the matter is that a system needs to be about 20% slower overall for you to even notice, unless you do a direct side-by-side comparison. The human mind is adaptable.

But yes, AMD single-core performance before Zen was significantly below what the fastest Intel systems could deliver. I was merely pointing out that AMD FPS and (some) Intel FPS deliver quite different gaming experiences in ONI. I have just gone from an FX8350 to a 3600X, and the difference in speed for ONI feels pretty moderate. Of course, loading and saving have gotten a lot faster, but game performance was OK before.


This is an older tip that may no longer apply, but assuming it still works:

Any space that you are not using, fill with solid tiles instead of vacuum, gas, or liquid. Especially gases. Gas movement is the most complex "background" task the game does. By filling those unused spaces with solid tiles, the only thing the game has to process for them is thermal transfer between individual tiles. Simply put, solid tiles don't move.
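To make that reasoning concrete, here is a small C++ sketch of a cell update loop in the spirit of the tip; the cell layout and pass names are invented, not ONI's actual sim. The point is simply that the expensive movement/pressure pass can be skipped outright for solid cells.

```cpp
// Sketch: solid cells skip the expensive movement pass entirely.
#include <vector>

enum class Phase { Solid, Liquid, Gas, Vacuum };

struct Cell {
    Phase phase = Phase::Vacuum;
    float temperature = 20.0f;
    float mass = 0.0f;
};

void simTick(std::vector<Cell>& cells) {
    for (Cell& cell : cells) {
        // Thermal transfer runs for every cell with mass
        // (neighbour exchange omitted for brevity).

        // The movement/pressure pass only runs for fluids, so a map
        // backfilled with solid tiles does far less work per tick.
        if (cell.phase == Phase::Gas || cell.phase == Phase::Liquid) {
            // compute pressure differentials, move mass, etc.
        }
    }
}
```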


1 hour ago, Gurgel said:

Actually, single-core performance was never really that bad; the gaming world is just always in hysterics about performance. The fact of the matter is that a system needs to be about 20% slower overall for you to even notice, unless you do a direct side-by-side comparison. The human mind is adaptable.

But yes, AMD single-core performance before Zen was significantly below what the fastest Intel systems could deliver. I was merely pointing out that AMD FPS and (some) Intel FPS deliver quite different gaming experiences in ONI. I have just gone from an FX8350 to a 3600X, and the difference in speed for ONI feels pretty moderate. Of course, loading and saving have gotten a lot faster, but game performance was OK before.

Well aware of all of the above. It isn't that I'm an AMD fanboy so much as that I hate Intel and their practices. I bought my first computer when I was 12, using money I'd made from mowing lawns for two summers. It was one of the 486 types AMD made back then, and that choice was purely economic; I didn't exactly have a huge budget back in '91. After that, once I started paying attention to everything... have you ever hated the market leader enough that you willingly bought a Cyrix processor? :D


14 minutes ago, storm6436 said:

Well aware of all of the above. It isn't that I'm an AMD fanboy so much as that I hate Intel and their practices.

Same here. AMD does try to compete on (mostly) solid engineering; nothing to hate there. But what Intel does ranges from slightly shady to outright criminal, and I basically feel bad whenever I have a part in giving them money (my laptop from work has an Intel CPU). They screw over their customers whenever possible. That said, I never had a Cyrix CPU, but for my own systems I have used AMD for a long, long time now and never felt I was getting a bad deal.

 


Intel was for a time behind AMD in top-of-the-line gaming processors. Intel had run into trouble pushing clock speeds with their Pentium 4 processors, and AMD swept in with some nice architectural design in their Athlon XP and later FX processors. Intel didn't really get the lead back until they pretty much gave up on the Pentium 4 line and brought their mobile CPU line to the desktop as the Core and Core 2 lines (the Core processors were not generally available as desktop processors, but Core 2 was). Interestingly, alongside that transition, they also ended up adopting AMD's AMD64 instruction set instead of continuing with their own IA-64 instruction set, which had no capability to run legacy 32-bit code.

I feel that Intel is currently running into some trouble with the not once, not twice, but thrice-delayed 10nm process, while AMD is moving ahead on TSMC's 7nm and 5nm nodes. It does, however, always feel to me that AMD is a step behind Intel, except when Intel occasionally stumbles pretty badly. It will be interesting to see whether Intel comes out of this 10nm lull swinging, or whether AMD maintains their current advantage over the long term.


1 hour ago, Gurgel said:

Same here. AMD does try to compete on (mostly) solid engineering; nothing to hate there. But what Intel does ranges from slightly shady to outright criminal, and I basically feel bad whenever I have a part in giving them money (my laptop from work has an Intel CPU). They screw over their customers whenever possible. That said, I never had a Cyrix CPU, but for my own systems I have used AMD for a long, long time now and never felt I was getting a bad deal.

 

Yeah, that was right when Cyrix released the 6x86, before AMD became a contender worth talking about. I replaced it with one of the early Athlons a few years later, IIRC, and have been using AMD ever since, upgrading about every 5 years or so to skip the marginal-improvement generations. I've been able to avoid Intel for every system I've bought. The folks doing the buying when I worked for the DoD were unfortunately more than fond of Intel, but *shrug*, that's government for you.

It's amazing how tech has changed. Today, nobody in their right mind would say "Hey, let's build a CPU with an on-chip GPU cluster... that uses main system memory instead of its own dedicated array." Thankfully that wasn't the model I picked up; even I thought that was particularly ill-advised. Though that was still back in the days of DOS 5 and 6... and if you wanted to run anything worth a darn, you juggled load statements and twiddled with HIMEM.SYS and EMM386 :p

1 hour ago, ZanthraSW said:

Intel was for a time behind AMD in top-of-the-line gaming processors. Intel had run into trouble pushing clock speeds with their Pentium 4 processors, and AMD swept in with some nice architectural design in their Athlon XP and later FX processors. Intel didn't really get the lead back until they pretty much gave up on the Pentium 4 line and brought their mobile CPU line to the desktop as the Core and Core 2 lines (the Core processors were not generally available as desktop processors, but Core 2 was). Interestingly, alongside that transition, they also ended up adopting AMD's AMD64 instruction set instead of continuing with their own IA-64 instruction set, which had no capability to run legacy 32-bit code.

I feel that Intel is currently running into some trouble with the not once, not twice, but thrice-delayed 10nm process, while AMD is moving ahead on TSMC's 7nm and 5nm nodes. It does, however, always feel to me that AMD is a step behind Intel, except when Intel occasionally stumbles pretty badly. It will be interesting to see whether Intel comes out of this 10nm lull swinging, or whether AMD maintains their current advantage over the long term.

Yep. I cannot describe the schadenfreude I had over the I-tanic. It's got to be embarrassing when your chief competitor, whose market cap is tiny compared to yours, is all "Hey, I know you guys are having a lot of trouble with 64-bit processing. *drops licensing contracts on desk* We got x86-64 working just fine." I'm curious how many +s they're going to tack onto the 10nm node. Aren't they up to 4 now? That moment when even C++ programmers are like "Bro, lay off the plusses, man. You've gone too far." But at least it's not 10nm# :D


22 hours ago, Gurgel said:

There may not actually be much room for optimization left.

When I build a new pipe and I see the flow of liquid "freak out" in a different pipe system not even connected to the new one, I tend to suspect some optimization could be added. Then again, I'm not familiar with their source code, so who knows.


On 9/4/2019 at 3:51 PM, Mastermindx said:

When I build a new pipe and I see the flow of liquid "freak out" in a different pipe system not even connected to the new one, I tend to suspect some optimization could be added. Then again, I'm not familiar with their source code, so who knows.

Depends on what you mean by "freak out"... The jerking everyone gets eventually is a consequence of having to pause the pipe simulation to rebuild flow direction, etc. Now, if adding pipes in area Y makes the flow in *just* area Z get funky (in a non-jerky way), that does suggest some algorithmic deficiency where it's confusing location data on read or write (or both), or... well, there's no way to tell for certain on the user end, since you don't know what's actually running under the hood... just that when you press the pedal, it goes vroom. :p

 


On 9/4/2019 at 10:51 PM, Mastermindx said:

When I build a new pipe and I see the flow of liquid "freak out" in a different pipe system not even connected to the new one, I tend to suspect some optimization could be added. Then again, I'm not familiar with their source code, so who knows.

We will find out. In the end, only Klei can really know where there is still room for improvement. They can probably at least do some things that take a lot of CPU but run faster when you have many cores (and get disabled otherwise).

