

3 hours ago, ToiDiaeRaRIsuOy said:

Do I feel 28 average FPS is the bare minimum for enjoyment? Yeah, certainly. Do I find it annoying that the FPS shoots up to 40 FPS and thereby creates uneven performance? Not really. It would only annoy me if the average low point were that low. I think you need roughly 24 FPS (it depends on who you ask, really) to keep your eyes from distinguishing individual frames instead of seeing the illusion of fluid movement. 20 FPS can still be seen as fluid if you are used to it, but if you ask someone who is used to 60 FPS to watch something below 24 FPS, they might be able to see the individual frames.

(I apologize if I don't fully understand your word usage here; English is not my first language.) I think FPS consistency is much more important than a lot of people realize. I do the vast majority of my gaming at 120 FPS, and I am so adjusted to this higher frame rate that going lower than about 50 actually gives me motion sickness. I also own other gaming consoles, and when I decide to play something that is framerate-locked to 30, it takes me a good few days to adjust and feel comfortable playing it. The key point is that I DO adjust. On the other hand, I have had console games that strive for 60 FPS and instead jump back and forth between 45 and 60. THIS is a problem. No matter how much time I spend trying to adjust, it makes those games absolutely unplayable for me.

I had the idea for a little experiment. I took my husband (who does almost all of his gaming at 30 FPS) and had him play a game running at 60 for a couple of hours. I then locked the framerate down to 30, and he couldn't tell the difference at all. I then adjusted some settings so the game ran at a variable framerate between 45 and 60. Only once this was done did he really begin to notice. He said it felt like the game was really jerky, or like his controls had some lag.

I think a very simple "fix" (a band-aid solution, but easy to do and better than nothing) would be to simply add an FPS limit option for late-game players.
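The locked-vs-variable difference above really comes down to frame-time jitter rather than average FPS. A minimal sketch (the frame times are made up for illustration):

```python
def frame_stats(frame_times_ms):
    """Return (average FPS, frame-time standard deviation in ms)."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    var = sum((t - avg_ms) ** 2 for t in frame_times_ms) / len(frame_times_ms)
    return 1000.0 / avg_ms, var ** 0.5

# A locked 30 FPS run: every frame takes the same 33.3 ms.
locked_fps, locked_jitter = frame_stats([33.3] * 6)
# A "45-60 FPS" run alternating between 16.7 ms and 22.2 ms frames.
variable_fps, variable_jitter = frame_stats([16.7, 22.2] * 3)
```

The "faster" variable run has a much higher average FPS but nonzero jitter, which is what the jerky-controls complaint is really describing; an FPS limit trades the average away to get the jitter down.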

4 hours ago, ToiDiaeRaRIsuOy said:

For the record, I did overclock my CPU, so it's not the standard value! That's why I am asking people to post single-core performance as well. The 3 on the right might have been significantly overclocked.

I could be mistaken, but I think I had the highest performance numbers, and I have done NO overclocking. I do think I had the most modern CPU, though, so maybe architecture has something to do with it as well?

Just wanted to point that out. 

 

TL;DR:

I am of the opinion that adding an FPS limit option could really help with late-game playability.

 

This might be common knowledge already, but I don't think resolution should make much of a difference to anyone. I did my benchmark at 3440x1440, but out of curiosity I just connected my rig to a 4K screen and got basically the same results (capped at 60 FPS, though, as my 4K screen can go no higher). I then dropped the game resolution to 1920x1080 and saw virtually no performance difference.


3 hours ago, DustFireSky said:

Till then I am dead. :D

Well, let's hope that expectations of your demise are greatly exaggerated ;)

45 minutes ago, Gracefulmuse said:

TL;DR:

I am of the opinion that adding an FPS limit option could really help with late-game playability.

Interesting point, and interesting experiments. Definitely worth looking into.

Also, I see no problem with your command of English. But it is not my first language either ;)


1 hour ago, Gracefulmuse said:

This might be common knowledge already, but I don't think resolution should make much of a difference to anyone. I did my benchmark at 3440x1440, but out of curiosity I just connected my rig to a 4K screen and got basically the same results (capped at 60 FPS, though, as my 4K screen can go no higher). I then dropped the game resolution to 1920x1080 and saw virtually no performance difference.

I did the same thing, 1440p down to 720p and there was no difference in performance. 

I guess the whole map needs to be updated even if it isn't visible. Kinda makes sense...


Hardware:

CPU: i7 4770k OC @ 4.2 GHz all cores
GPU: GTX 1070
RAM: 16 GB DDR3 1866
Storage: Samsung Evo 850 500GB SSD
Motherboard: MSI Z87-G41 MC-Mate (MS-7850)
Resolution: 2560 x 1440 @ 144 Hz (Fullscreen, 1.0 UI scale)

Loading time:

1 min 00 sec

Average FPS:

38 FPS (max normal zoomed out to the center of the home base @ 3x speed)

Thoughts:

My current active game is already down to 28 FPS at cycle 450. I have 16 dupes instead of 8, but even if I limit them all to the home base, I'll get maybe 29-30 FPS. I think it has to do with all the liquid/gas pipes I am running for temperature control and industry.

I really hope Klei does something about the game's performance. I've owned the game since it first entered early access, but I stopped playing around the Cosmetics update for a year or so because I just couldn't stand the sluggish performance with large bases. It takes a lot of the fun out of the game when your base design revolves around designing for game performance.


AMD Ryzen 5 1400 3.20 GHz
8 GB DDR4 2400
1 TB 7200 RPM (am I the only old fart not using solid state?)
AMD Radeon RX 560 4 GB GDDR5
Win 10 64 bit

1920x1080 @59Hz, Fullscreen
Chrome and Windows running in the background, eating half my RAM.

Loading time:
1 min 52 seconds (yay, not solid state!)

FPS:
paused and 1x:
Zoomed all the way in, 41-45 depending on what was on screen
Zoomed all the way out: 22  (modded; entire asteroid on screen and running)

3x:  zoom in: 26-29
zoom out: 17

10x: zoom in: 19-21
zoom out: 11-12

I was using bigger camera zoom-out and speed control mods (sorry, didn't take the time to test at 2x as well). I saw huge FPS differences based not only on how much was on screen, but on whether it was active with animations. But even at 10x speed, zoomed out enough that I was at 13 FPS, I never saw any of the extreme lag when scrolling that the game had before the pathfinding multithreading. It felt sluggish when scrolling or zooming, but FPS always jumped back up when not moving around. What this game needs is a minimap. Playing with this base and the whole map revealed, it really felt like half of the lag could be eliminated with a click-to-move-camera map instead of processing every little thing on the way there.


I got some ancient hardware.

 

Intel i7 920 2.66ghz overclocked to 4.2ghz (200.5mhz x 21) 45nm
DDR3 PC3-10700 (667 MHz) 12GB (6x2GB) running at 800 MHz CAS 7, FSB:DRAM 2:8 (NB 3600 MHz) [info from CPU-Z]
GTX 1070 8gb
Western Digital WD3200BEKT 7200rpm - lol

66 second load time (measured until the loading screen went away; the game still loads for a few seconds afterwards while it populates resources)
6 second save time

 

35-40fps @ 1680x1050 
17fps when zoomed all the way out

 

Cycle 1918 took 1 min 50 sec to elapse, timed from after one save completed to when the next save started, running at debug super speed (Alt-Z), averaging 25 FPS (at the load-in screen).



 


AMD FX 8370 4.0 GHz overclocked to 4.8 GHz

16 GB DDR3
SSD 
Nvidia GeForce 1070 Gtx
Win 10 64Bit

Loading Time : 1 min 20 sec
FPS speed x3: 19-21
FPS speed x2: 25
FPS speed x1: 27
I looked at the FPS from other Intel processors, and I think the game interacts badly with AMD processors.


11 hours ago, joelzhl said:

(max normal zoomed out to the center of the home base @ 3x speed)

10 hours ago, bmilohill said:

(modded; entire asteroid on screen and running)

Personally, I didn't scroll at all, since I figured if I scrolled, I would end up in a location nobody else would end up in. The standard should be not to move around. We clearly do not measure the same way, which means the measurements are nearly worthless.

Mods vs. unmodded is another issue. We can't assume mods have no effect at all. Most likely they don't, but we should remove mods just to be sure.

We should also investigate whether the time of the cycle matters. If one slowdown is dupe pathfinding, the framerate will be higher when they are asleep.

 

Somebody should write a guide on how to make the measurements: how to back up the mod list, reset it, set the resolution to 1080p, set the screen at a specific location (like pressing H), and then zoom out completely. Plus instructions on how to use Fraps, since it actually calculates min/max/average FPS.
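As a rough illustration of what a tool like Fraps reports, min/max/average FPS can be derived from per-frame timestamps. This is only a sketch of one possible convention (instantaneous FPS per frame), not Fraps' actual method:

```python
def fps_report(timestamps_s):
    """Min/max/average instantaneous FPS from frame timestamps in seconds."""
    deltas = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    fps = [1.0 / d for d in deltas]  # one FPS sample per frame interval
    return min(fps), max(fps), sum(fps) / len(fps)

# Four frames: two fast intervals (25 ms) and one slow one (50 ms).
lo, hi, avg = fps_report([0.0, 0.025, 0.050, 0.100])
```

Whatever convention the guide picks, everyone has to use the same one, or the min/max numbers won't be comparable.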

Reporting should also be standardized: CPU name and frequency, as well as memory size, frequency, and timings. Obviously you have to skip information you don't have; not everybody knows the memory timings of their computer. They are usually hidden somewhere in the BIOS setup.

 

11 hours ago, joelzhl said:

Hardware:

CPU: i7 4770k OC @ 4.2 GHz all cores
RAM: 16 GB DDR3 1866

Average FPS:

38 FPS (max normal zoomed out to the center of the home base @ 3x speed)

I decided to downclock and replicate that one as closely as possible.

CPU i7 4790k @ 4.2 GHz all cores

RAM 16 GB DDR3 2933

I hit H and zoomed out completely (4k resolution, 3x speed)

Avg: 43.350 - Min: 31 - Max: 47

The major difference is going from 1866 to 2933 MHz RAM, and the framerate increased by about 14%.

Increasing the CPU to 4.4 GHz adds 2 fps to the average.


Hardware:

 

Windows 10

CPU: AMD Ryzen 2200G

GPU: 1050 ti mini (factory overclocked)

RAM: 8GB

Hard drive: Toshiba 1TB HDD

Mainboard: GIGABYTE AB350M-DS3H

Resolution: 1920 x 1080

 

FPS:

on main base

min: 25

max: 28

average: 27

 

loading/saving:

loading the save: 78.73 sec

daily save: ~5 sec

 

Note:

The CPU isn't overclocked.


6 hours ago, Nightinggale said:

Reporting should also be a standard

9 times out of 10, I agree with all of this. Last night was busy with Friday-night things, hence the silly mod/scroll/zoom benchmarking (though I do think the zoom and scroll issue, which separates what the game is calculating from what the graphics card is displaying, is a separate issue we should also look into). I should be able to get more standard numbers with my AMD setup tomorrow.


On 5/10/2019 at 7:59 AM, Nightinggale said:

 We don't have a clear line, indicating that the bottleneck isn't the CPU core speed. Instead it is fairly clear that most systems hit a wall around 30 fps.

Many systems are set to synchronize the GPU with the screen refresh, and almost all current displays (in the US) run at 60 Hz. Because of this, you'll see spikes around 60 and 30 FPS. Anything faster than 60 gets sent to the display in step with its 60 Hz refresh rate, and frame rates near 30 get rounded to 30 for the same reason.
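With vsync on, a frame can only be shown on a refresh boundary, so the displayed rate snaps to refresh/1, refresh/2, refresh/3, and so on. A small sketch of that quantization:

```python
import math

def vsync_fps(render_time_ms, refresh_hz=60):
    """Displayed FPS when every frame must wait for the next refresh boundary."""
    interval_ms = 1000.0 / refresh_hz                 # ~16.7 ms at 60 Hz
    intervals = math.ceil(render_time_ms / interval_ms)  # refresh slots per frame
    return refresh_hz / intervals

# A 16 ms frame fits in one 16.7 ms slot -> shown at 60 FPS.
# An 18 ms frame just misses and waits for the second slot -> 30 FPS.
# A 35 ms frame needs three slots -> 20 FPS.
```

This is why a game that renders at, say, 55 FPS internally can end up displayed at 30: small slowdowns get amplified to the next divisor of the refresh rate.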

15 hours ago, akrabat14 said:

Amd Fx 8370 4.0Ghz overlocker to 4.8 Ghz

<snip>
I looked at the FPS from other Intel processors, and I think the game interacts badly with AMD processors.

Intel processors are highly optimized for mathematical operations -- which ONI does a LOT of.  It does not surprise me that the Intel is performing better in this particular case.


Hardware:

CPU: Ryzen 2700x 3.70GHz (no OC)
GPU: GTX 1070ti 8GB
RAM: 16 GB DDR4 3200
Storage: SSD samsung 860 EVO 500 gb
Motherboard: msi b350m gaming pro

resolution: 1920x1080 @59Hz, Fullscreen

Loading: 58s

FPS (3x): Frames: 2595 - Time: 60000 ms - Avg: 43.250 - Min: 35 - Max: 51 (moving around the base and scrolling)


I did some testing on the benefits of overclocking. Still running a 4790k with 2933 MHz RAM. I set the resolution to 1080p, pressed H, and zoomed out just prior to each test.

4 GHz: Avg: 41.783 - Min: 30 - Max: 45

4.7 GHz: Avg: 47.317 - Min: 35 - Max: 51

Even though I only measured 30% CPU load on the highest loaded core, the CPU core speed does indeed matter.
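Spelling out the scaling from those two runs (same numbers as above):

```python
fps_40, fps_47 = 41.783, 47.317   # average FPS at 4.0 GHz and 4.7 GHz
fps_gain = fps_47 / fps_40 - 1    # ~13.2% more frames
clock_gain = 4.7 / 4.0 - 1        # 17.5% more clock
scaling = fps_gain / clock_gain   # ~0.76: roughly three quarters of the extra
                                  # clock shows up as FPS; the rest is
                                  # presumably spent waiting on memory
```

So the game is CPU-speed sensitive but not perfectly clock-bound, which fits the RAM-frequency result earlier in the thread.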


1 hour ago, Nightinggale said:

Even though I only measured 30% CPU load on the highest loaded core, the CPU core speed does indeed matter.

Most modern CPUs rotate a thread through their various cores when those threads are changing tasks, to reduce the problem of overheating. 30% load sounds about right for an i7. Here's a screenshot from my i5, and I'm holding at about 45% across 4 cores.

[screenshot: per-core CPU usage graph]

The big spike, where CPU2 (red) holds 100% use for about 8 seconds, is where I'm loading ONI. It's a single task (loading the file into RAM), so the CPU can't rotate it between cores until it's completed. Once it is, you can see that everything starts to normalize and all of the cores get utilized, even though ONI is a single-threaded application.

[screenshot: load spike on CPU2, then usage spread across all cores]

 


18 minutes ago, camelot said:

My friends, the new AMD CPUs are amazing. Hopefully when they launch in July, ONI will run much smoother on AMD CPUs.

We need to test this. If you sponsor the hardware, I'm willing to do the needed benchmarks to see how it compares to Intel CPUs ;)


46 minutes ago, Nightinggale said:

We need to test this. If you sponsor the hardware, I'm willing to do the needed benchmarks to see how it compares to Intel CPUs ;)

On a completely unrelated note, I will likely get a Ryzen 7 3800X, and I will be happy to provide some benchmarks. I will wait a month or so after official availability though, just to be sure there are no major issues.


I've been playing ONI for a long time and have a 1700-cycle save, 19 MB in size. I just bought a new laptop, so I would like to share the performance differences here.

 

My old laptop, MSI GS60:

Intel 6700HQ

Nvidia 970M 3GB

16GB memory at 2133MHz

Loading takes 70 sec, and running at 2x speed gives FPS around 15 ~ 25.

 

My new laptop, MSI GS75:

Intel 9750H

Nvidia 2070 Max-Q 8GB

16GB Memory at 2666MHz

Loading takes 50sec and running at 2X speed gives FPS around 40 ~ 50


There seems to be new AMD hype lately, and for good reason. Interesting stuff is going on in the CPU world at the moment.

What is less talked about is how such CPUs would handle ONI. High single-core performance is good, but I'm actually expecting the "game cache" to be the interesting part. The level 3 cache is memory on the CPU holding copies of RAM contents. When the CPU reads some RAM, the data is kept in the level 3 cache, and if it is needed again, it can be fetched from the L3 cache much faster. A number of us have around 1 MB per thread, meaning the up to 64 MB on Zen 2 is a massive increase. (I don't know why the promotional video says 72 when no CPU has more than 64.)

What does this mean for ONI? Well, if the game loops through all cells to update temperature, it will read the element of each cell and then read the data for that element (thermal conductivity and capacity). If you have 64 MB, there is a good chance the CPU has kept all the elements cached, meaning it can look up any element quickly. If you have just, say, 8 MB, the risk that an element has been evicted from the cache is much greater, meaning the CPU will have to read it from memory again.
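That access pattern can be sketched like this. The data layout and values here are hypothetical, not Klei's actual code; the point is that the grid is huge and streams through the cache once per pass, while the small element table is what you hope stays resident in L3:

```python
# Hypothetical element property table: tiny compared to the grid, so this is
# the data a big L3 "game cache" could keep resident. Values are made up.
ELEMENTS = {
    "oxygen":  {"conductivity": 0.024, "capacity": 1.005},
    "water":   {"conductivity": 0.609, "capacity": 4.179},
    "granite": {"conductivity": 3.390, "capacity": 0.790},
}

def temperature_step(cells, dt=1.0):
    """One simplified conduction pass over a 1D strip of cells.

    The cell array is read once per pass, but every cell does repeated
    lookups into the small element table: exactly the pattern where a
    larger L3 cache avoids round trips to RAM.
    """
    for i in range(len(cells) - 1):
        a, b = cells[i], cells[i + 1]
        k = min(ELEMENTS[a["element"]]["conductivity"],
                ELEMENTS[b["element"]]["conductivity"])
        flow = k * (b["temp"] - a["temp"]) * dt
        a["temp"] += flow / ELEMENTS[a["element"]]["capacity"]
        b["temp"] -= flow / ELEMENTS[b["element"]]["capacity"]
```

With only a handful of elements the table fits in any cache; in the real game the same idea applies to the per-element data the simulation touches on every single cell update.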

Time will tell how well Zen 2 plays ONI, but so far it looks really promising. This might actually be the first AMD CPU with a chance of beating Intel in ONI performance. Now we just need somebody here to actually buy one and test it.

 


2 hours ago, Nightinggale said:

(I don't know why the promotional video says 72 when no CPU has more than 64)

That is the total cache, including L2 and L1 (but probably not the TLB; it's too small). I will get one of these, but likely 8 cores only.

 


9 hours ago, Gurgel said:

That is total cache, including L2 and L1 (but probably not TLB, too small).

64 MB lv3 + 16*0.5 MB lv2 = 72 MB

So marketing decided to call 64 MB "72 MB". That's confusing at best, maybe even downright misleading. The level 2 cache contains copies of data from the level 3 cache, meaning it's impossible to store more than 64 MB in those 72 MB.

I did some further digging, and it turns out that AMD has something they call a CCX (Core Complex). Each consists of 4 cores and 16 MB of shared level 3 cache. There are 2 CCXs on each CCD (Core Chiplet Die) and up to 2 CCDs. 12-core CPUs disable 2 cores on each CCD (1 core in each CCX?). It's a great design for production because it reduces waste from manufacturing defects (which are common with something this small). They can discard a CCD instead of a whole CPU, and if a core is broken, they can disable it and use the CCD for 12-core CPUs. This helps keep costs down.

The question is what this means for ONI. Does one CCX only have access to 16 MB of L3 cache? Can it access the other CCX's L3 cache on the same CCD? What about the other CCD? If it can, what about latency?

What if two CCXs have the same memory in their L3 caches? Will that create latency issues when the CPU tries to avoid race conditions? Will the 8-core CPU be faster than the 12-core one because it only has one CCD, hence no need for CCDs to cooperate on the memory interface/cache?

I feel like the more I look into this, the more questions I have. It looks to me like the chip design is mostly optimized for independent tasks running in parallel, like playing and streaming at the same time. If the streaming runs on a different CCX, or ideally a different CCD, then streaming will have less of an impact on gaming, even when the game is CPU-throttled.



On 2019/5/10 at 3:20 AM, DustFireSky said:

... Interesting. More dupes are scaling better than in several versions before.

  • 24 Dupes = 25FPS
  • 30 Dupes = 22FPS
  • 40 Dupes = 18FPS
  • 50 Dupes = 16FPS

If they optimized the game a little bit so that 40 dupes were always over 30 FPS, all would be good.

I also wish they could, but that would probably require a whole new algorithm (with better big-O time complexity). If they are already using the best one, that would be sad.


Playing on the Laptop

CPU: i7-4700HQ CPU @ 2.40GHz
GPU:  NVIDIA GeForce GTX 860M 2Gb
RAM: 16Gb
1920x1080 Fullscreen


Load time: 71 seconds. FPS: 25 in most areas, but it occasionally freezes and drops to 5 for a couple of seconds.
Dev Build:

For me the bottleneck is the CPU; both the RAM and VRAM are not at their limits.


Hardware: 9900k@5ghz/1080ti/32gb ram/860evo

Resolution: 2560 x 1440 @ 60 Hz (Fullscreen, 1.2 UI scale)

ONI: Testing build 353289

Loading time: 38 sec.

Low-medium speed: 60 fps stable.

Fast speed: 55 fps average with drops to 40.


Archived

This topic is now archived and is closed to further replies.

Please be aware that the content of this thread may be outdated and no longer applicable.
