Recommended Posts

bumbaclad    9
1 minute ago, gabberworld said:

If you don't use multithreading, at some point you hit the CPU limit, which means the calculations stall and the FPS drops too; you can't calculate more than the CPU can do at one time. That's where multithreading comes in: it runs the calculations and loops separately.

 

With the approach I'm describing, calculation time would no longer be any kind of perceived bottleneck.

Also, going back to CPU multithreading, it is not as simple as you think. Things must stay in sync, and work has to be sent, calculated, returned, and then applied. Because of that, multithreading could actually lead to more hangups.

gabberworld    13
39 minutes ago, bumbaclad said:

With the approach I'm describing, calculation time would no longer be any kind of perceived bottleneck.

Also, going back to CPU multithreading, it is not as simple as you think. Things must stay in sync, and work has to be sent, calculated, returned, and then applied. Because of that, multithreading could actually lead to more hangups.

I have written multithreaded apps from scratch in C++; I know what I'm talking about.

Not mods, but actual real applications.

Also, no one said it is easy. But we're not talking about impossible stuff here either.

Another thing: the more cores you have, the less your PC can do on a single thread. Yes, it is fast in some cases, as the OS moves the main thread between cores automatically, but it can't use, and never will use, 100% of the CPU. Which means that if it needs to run a for loop over, say, a million database entries over and over, it will be in big trouble.

gabberworld    13

Don't expect them to actually change code they wrote years ago, though; changes like this take a lot of work. Usually you start thinking about multithreading from day one of development; adding it later is much harder.

https://support.klei.com/hc/en-us/articles/360029880271-Oxygen-Not-Included-Minimum-System-Requirements

bumbaclad    9
Quote

gabberworld: I have written multithreaded apps from scratch in C++; I know what I'm talking about.

I tried to stop replying because you will not accept anything anyone else tells you, but you keep editing and posting, so I will now respond. You obviously do not know, which is clear from your numerous posts in this thread. Making a non-realtime application that uses multithreading is a much simpler problem. As I stated earlier, in a game it is a different story. If we can agree that order of operations is a requirement in an application, and you understand that a thread will not return exactly when you want it to, that you cannot move on without its contents, and that it takes time to package, send, and receive those contents, then you will understand the issue faced in making a game use multiple threads.
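To illustrate the point, here is a minimal C# sketch (not ONI's code; the workload and timings are made up): even when the calculation runs on a worker thread, the frame cannot be applied until that thread returns, so a slow worker still stalls the frame.

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class FrameSyncSketch
{
    // Stand-in for one chunk of simulation work handed to a worker thread.
    static int SimulateChunk()
    {
        int acc = 0;
        for (int i = 0; i < 50_000_000; i++) acc += i % 7;
        return acc;
    }

    static void Main()
    {
        var frameTimer = new Stopwatch();
        for (int frame = 0; frame < 3; frame++)
        {
            frameTimer.Restart();

            // Send: hand the calculation to a worker thread.
            Task<int> worker = Task.Run(SimulateChunk);

            // The main thread could do other work here, but before it can
            // apply the results and move to the next frame, it must wait.
            int result = worker.Result;   // blocks until the worker returns

            // Apply: only now can the frame advance.
            Console.WriteLine($"frame {frame}: result={result}, {frameTimer.ElapsedMilliseconds} ms");
            // If this exceeds ~16.7 ms, the 60 fps target is already missed,
            // no matter which thread did the arithmetic.
        }
    }
}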

 

Quote

gabberworld: Yes, it is fast in some cases, as the OS moves the main thread between cores automatically, but it can't use, and never will use, 100% of the CPU.

You would never want to use 100% of the CPU anyway, because you are running Windows, Steam, maybe a web browser; your computer needs the CPU to operate. And as I said earlier, the CPU cannot even dream of running like the GPU can, and therefore we must move as much of the burden off it as possible; putting calculations on another thread will not magically change that fact.

gabberworld    13
15 minutes ago, bumbaclad said:

I tried to stop replying because you will not accept anything anyone else tells you, but you keep editing and posting, so I will now respond. You obviously do not know, which is clear from your numerous posts in this thread. Making a non-realtime application that uses multithreading is a much simpler problem. As I stated earlier, in a game it is a different story. If we can agree that order of operations is a requirement in an application, and you understand that a thread will not return exactly when you want it to, that you cannot move on without its contents, and that it takes time to package, send, and receive those contents, then you will understand the issue faced in making a game use multiple threads.

 

You would never want to use 100% of the CPU anyway, because you are running Windows, Steam, maybe a web browser; your computer needs the CPU to operate. And as I said earlier, the CPU cannot even dream of running like the GPU can, and therefore we must move as much of the burden off it as possible; putting calculations on another thread will not magically change that fact.

You can't do everything on the GPU.

MinhPham    6
2 hours ago, TheMule said:

Not necessarily. If you have an external library that renders a very complex 3D scene, and it takes minutes, and you have this pseudocode:

import 3dlib

call 3dlib.RenderScene('scene.dat')

you can write that in C, C++, C#, LPC, Java, JavaScript, Python, Lua, PHP, Pascal, whatever; it doesn't impact the performance. That's because all the CPU-intensive stuff is outside the language.

And even if rendering the scene involves multiple steps, calling those steps from a higher-level program doesn't affect the performance, as long as the driver program doesn't implement actual algorithms that manipulate the data.

It has been speculated in this thread that lag is caused by physics simulation code written in C#, which is not true.

But if you are doing actual game calculations during the render phase, and those calculations are done in C#, then it's a different story... do you know that cycle times are a lot slower on a low-end computer without a decent GPU?
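For illustration of the quoted point, a minimal C# sketch (the library name scene3d and the RenderScene entry point are hypothetical, not a real API): when all the heavy work lives in a native library, the managed code is only a thin driver, so the choice of high-level language barely matters.

using System;
using System.Runtime.InteropServices;

class RenderDriver
{
    // All the CPU-intensive work happens inside this (hypothetical) native library.
    [DllImport("scene3d")]                       // scene3d.dll / libscene3d.so
    static extern int RenderScene(string path);  // assumed signature

    static void Main()
    {
        // The C# side only makes the call and implements no algorithm itself,
        // so its overhead is negligible next to the minutes of native work.
        int status = RenderScene("scene.dat");
        Console.WriteLine($"render finished with status {status}");
    }
}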

gabberworld    13

By the way, I can run this game fine except for some minor issues here and there; I can live with those. But it really makes me see red when someone complains about lag in a 2D game; 3D games are more complex.

 

TheMule    239

 

1 hour ago, MinhPham said:

But if you are doing actual game calculations during the render phase, and those calculations are done in C#, then it's a different story...

"if"

 

1 hour ago, MinhPham said:

do you know that cycle times are a lot slower on a low-end computer without a decent GPU?

"PeopIe keep asking me if I know about cycle times. Well I used to program a ZX Spectrum in assembly(**), the only way to get custom (bad) sound effects. All it had was a buzzer pretty much directly controlled by the CPU. You had to do the timing yourself, looking up cycles times of every machine instruction on paper, since there was no assembler that did that. When I started, actually, I had no assembler at all, I had to lookup the op codes on paper, and write the program as a long sequence of hex numbers. (*)

So yeah, I think I know about cycles on low end computers."

Let's say I know enough that it never crosses my mind to think I know enough about how ONI works to tell the devs at Klei how they should do their job.

There are people there who know the internals of their code in much more depth than I do and know how to develop a game such as ONI much better than I do, and even if they spent a month training me, I wouldn't be sure I could offer any advice or critique. And that's with 38 years of experience in programming.

 

(*) There was no GPU, of course. The CPU had to update the video memory to draw graphics at the same time it was playing a sound and staying in tune.

(**) Not professionally, unfortunately. Or fortunately since I was well underage at the time. So yes - although very badly and amateurishly - I actually did that.

ExEvolution    9

@gabberworld FYI, according to the Steam hardware survey, 20% of all Steam users still only have 2 CPU cores. If they changed ONI to use more threads, 20% of Steam users would likely be unable to play the game anymore, or their overall performance in the game would drop significantly because now threads need to compete for CPU time. A single CPU core can only process 1 thread at a time, so if you throw 2 threads onto a single core, you're going to experience a lot of context switching that you just don't want in a real-time application.

ONI requires a dual-core CPU to play: not a quad-core, not a hexa-core, and not an octa-core. And because of that, single-core performance of your CPU is king; ONI only has 2 cores to play with.

 

If the simulation can't keep up with the speed you want to run the game at, you're going to experience lag. If you want to hit 60 fps, all calculations in the simulation and render threads need to be completed within 16.7 milliseconds, and the game processes so much data in just the physics alone that it often just can't keep up.

 

Physics is one of the most difficult things to calculate in real time: you're not just taking a single number and performing math on it, you're sampling every cell and calculating motion, temperature, pressure, etc. with its neighbors.

 

Imagine you've got a map with 256x384 cells, and each of those cells needs to be processed by comparing it with 8 adjacent cells just to determine what the new temperature and pressure will be, and not just once, but for as long as the simulation is running. And don't forget that there are multiple layers to each cell: you've got electrical, gas and liquid piping (which also do their own temperature calculations and pathfinding), background buildings and foreground buildings, and automation. Then throw in a couple dozen dupes and hundreds of critters pathfinding and you've got a recipe for an engine that is working at its limit.
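As a rough illustration of that scale, here is a minimal C# sketch (not ONI's simulation; the update rule and numbers are invented, only the 256x384 grid size comes from the example above): one naive temperature pass already touches every cell and its 8 neighbors on every tick, and its cost can be compared directly against the 16.7 ms frame budget.

using System;
using System.Diagnostics;

class CellPassSketch
{
    const int W = 256, H = 384;                // map size from the example above
    static float[,] temp = new float[W, H];
    static float[,] next = new float[W, H];

    static void Main()
    {
        var rng = new Random(1);
        for (int x = 0; x < W; x++)
            for (int y = 0; y < H; y++)
                temp[x, y] = (float)rng.NextDouble() * 100f;

        const int ticks = 100;
        var sw = Stopwatch.StartNew();
        for (int t = 0; t < ticks; t++)
        {
            // One simulation tick: every interior cell samples its 8 neighbors.
            for (int x = 1; x < W - 1; x++)
                for (int y = 1; y < H - 1; y++)
                {
                    float sum = 0f;
                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                            if (dx != 0 || dy != 0)
                                sum += temp[x + dx, y + dy];
                    // crude "conduction": drift toward the neighbor average
                    next[x, y] = temp[x, y] + 0.1f * (sum / 8f - temp[x, y]);
                }
            var swap = temp; temp = next; next = swap;
        }
        sw.Stop();
        Console.WriteLine($"average tick: {sw.Elapsed.TotalMilliseconds / ticks:F2} ms " +
                          "(the whole 60 fps frame budget is ~16.7 ms, shared with everything else)");
    }
}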

LZ96    21
1 minute ago, ExEvolution said:

@gabberworld FYI, according to the Steam hardware survey, 20% of all Steam users still only have 2 CPU cores. If they changed ONI to use more threads, 20% of Steam users would likely be unable to play the game anymore, or their overall performance in the game would drop significantly because now threads need to compete for CPU time. A single CPU core can only process 1 thread at a time, so if you throw 2 threads onto a single core, you're going to experience a lot of context switching that you just don't want in a real-time application.

ONI requires a dual-core CPU to play: not a quad-core, not a hexa-core, and not an octa-core. And because of that, single-core performance of your CPU is king; ONI only has 2 cores to play with.

If the simulation can't keep up with the speed you want to run the game at, you're going to experience lag. If you want to hit 60 fps, all calculations in the simulation and render threads need to be completed within 16.7 milliseconds, and the game processes so much data in just the physics alone that it often just can't keep up.

Physics is one of the most difficult things to calculate in real time: you're not just taking a single number and performing math on it, you're sampling every cell and calculating motion, temperature, pressure, etc. with its neighbors.

Imagine you've got a map with 256x384 cells, and each of those cells needs to be processed by comparing it with 8 adjacent cells just to determine what the new temperature and pressure will be, and not just once, but for as long as the simulation is running. And don't forget that there are multiple layers to each cell: you've got electrical, gas and liquid piping (which also do their own temperature calculations and pathfinding), background buildings and foreground buildings, and automation. Then throw in a couple dozen dupes and hundreds of critters pathfinding and you've got a recipe for an engine that is working at its limit.

this guy knows what he's talking about

 

gabberworld    13
3 hours ago, ExEvolution said:

@gabberworld FYI, according to the Steam hardware survey, 20% of all Steam users still only have 2 CPU cores. If they changed ONI to use more threads, 20% of Steam users would likely be unable to play the game anymore, or their overall performance in the game would drop significantly because now threads need to compete for CPU time. A single CPU core can only process 1 thread at a time, so if you throw 2 threads onto a single core, you're going to experience a lot of context switching that you just don't want in a real-time application.

 

Yeah, OK, I suppose you're right that most of the player base is on 2-core CPUs. Which leads to the problem that they aren't actually meeting the game's minimum specs, at least the ones complaining about lag :)

--

Anyway, if we're talking about how many threads to use on the CPU, that is not an issue at all for developers; they can reduce or increase the number programmatically depending on the PC.

So I read the post from the user who started complaining about lag, and it comes down to the fact that this game is too old for him, because he has a 6-core PC. I guess you could see the same experience around the year 2000, when single-threaded games didn't get along with 2-core PCs.

 

 

KittenIsAGeek    1605
17 hours ago, bumbaclad said:

The majority of lag is likely due to draw calls. They just need to move the game over to shader chunks instead, which is going to make things like modding much more complicated, but the user will see a 50+% performance increase.

If the lag is caused by draw calls, then players with 1000+ cycle bases suffering lag would see a dramatic improvement if they lowered their graphics resolution.  Smaller screen area, fewer draw calls.  Unfortunately, they do not.  The graphics engine is not the root of the problem.

13 hours ago, ExEvolution said:

Physics is one of the most difficult things to calculate in real time: you're not just taking a single number and performing math on it, you're sampling every cell and calculating motion, temperature, pressure, etc. with its neighbors.

Yes. This. Because it isn't simply crunching a number, it's pulling from lots of related nearby data that is continually evolving. This is likely causing a bottleneck between the CPU and the RAM simply because of the amount of data that has to be transferred.
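A small way to see that effect is below, a C# sketch that has nothing to do with ONI's actual data layout: the same number of additions becomes far slower once the access pattern stops being cache-friendly, because the CPU spends its time waiting on RAM rather than doing math.

using System;
using System.Diagnostics;

class MemoryBoundSketch
{
    static void Main()
    {
        const int N = 4096;                    // 4096 x 4096 floats = 64 MB
        var grid = new float[N * N];
        var sw = new Stopwatch();
        double sum;

        // Sequential pass: neighboring values share cache lines.
        sw.Restart();
        sum = 0;
        for (int row = 0; row < N; row++)
            for (int col = 0; col < N; col++)
                sum += grid[row * N + col];
        Console.WriteLine($"row-major:    {sw.ElapsedMilliseconds} ms (sum={sum})");

        // Strided pass: the same additions, but almost every access misses the
        // cache, so the CPU mostly waits for data to arrive from RAM.
        sw.Restart();
        sum = 0;
        for (int col = 0; col < N; col++)
            for (int row = 0; row < N; row++)
                sum += grid[row * N + col];
        Console.WriteLine($"column-major: {sw.ElapsedMilliseconds} ms (sum={sum})");
    }
}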

TheMule    239
17 hours ago, ExEvolution said:

A single CPU core can only process 1 thread at a time, so if you throw 2 threads onto a single core, you're going to experience a lot of context switching that you just don't want in a real-time application.

Well, technically a context switch is much more CPU-intensive when switching between tasks (processes) than between threads inside the same task; there's very little to switch. It's true that at the CPU level, with superscalar architectures, it slows everything down, but at the system level the cost is negligible compared to switching tasks. That's the point of having threads.

Anyway, it's kind of pointless, because an application can create threads dynamically based on the number of cores you have. It's possible for an application to run with 16 threads on a 16-core CPU and 2 threads on a 2-core CPU. Actually, it's rather common.
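A minimal C# sketch of that idea (again, not ONI's code; the per-cell work is a stand-in): size the worker count from the machine's core count at startup and split one data-parallel pass into that many chunks.

using System;
using System.Threading.Tasks;

class WorkerPoolSketch
{
    static void Main()
    {
        // 1 worker on a 2-core machine, 15 on a 16-core machine:
        // leave one core free for the main/render thread.
        int workers = Math.Max(1, Environment.ProcessorCount - 1);
        var cells = new double[1_000_000];
        var options = new ParallelOptions { MaxDegreeOfParallelism = workers };

        // Split one data-parallel pass into exactly that many chunks.
        Parallel.For(0, workers, options, w =>
        {
            int chunk = cells.Length / workers;
            int start = w * chunk;
            int end = (w == workers - 1) ? cells.Length : start + chunk;
            for (int i = start; i < end; i++)
                cells[i] = Math.Sqrt(i);       // stand-in for per-cell work
        });

        Console.WriteLine($"processed {cells.Length} cells using {workers} worker threads");
    }
}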

17 hours ago, LZ96 said:

each of those cells needs to be processed by comparing it with 8 adjacent cells

Correct. And there lies another problem. If you want the simulation to be deterministic, you have to consider the cells ordered. That is, you start with one cell, then compute the adjacent cells in a specific order, and so on. It's not something that's easy to turn into a parallel algorithm, one that can use multiple threads effectively. It's pointless to have 128 threads if 127 of them are waiting on the result of the first one before they can begin processing.

Thanks to the work of a lot of smart people here, we know a lot about how ONI works. We're now able to create and control waterfalls and beads only because someone studied how liquids interact. But a lot of that depends on cells/elements behaving consistently, meaning there's a specific order in which they are updated.
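Here is a tiny C# sketch of why ordering fights parallelism (the overflow rule is invented, not ONI's liquid code): when each cell's update depends on what the previous cell just did in the same pass, the loop is inherently sequential, and chopping it across threads would change the result.

using System;

class OrderedUpdateSketch
{
    static void Main()
    {
        var mass = new float[16];
        mass[0] = 100f;

        // Deterministic, ordered pass: each cell pushes its overflow into the
        // next cell. Cell i+1's update depends on what cell i did a moment ago
        // in this same pass, so the loop cannot simply be chopped into
        // independent chunks for threads: a thread starting at cell 8 would
        // read state that cell 7 has not produced yet.
        for (int i = 0; i < mass.Length - 1; i++)
        {
            float overflow = Math.Max(0f, mass[i] - 10f);
            mass[i] -= overflow;
            mass[i + 1] += overflow;
        }

        Console.WriteLine(string.Join(" ", mass));
        // Running the same rule with the cells split across threads, or in a
        // different order, would give a different (and non-deterministic) result.
    }
}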

bumbaclad    9
6 hours ago, KittenIsAGeek said:

If the lag is caused by draw calls, then players with 1000+ cycle bases suffering lag would see a dramatic improvement if they lowered their graphics resolution.  Smaller screen area, fewer draw calls.  Unfortunately, they do not.  The graphics engine is not the root of the problem.

A lower resolution does not reduce the number of draw calls. You are implying that the user's camera has more objects off screen being culled, but that is splitting hairs and is not a direct correlation: they could get a mod that just zooms out more, leading to exactly the same number of draw calls, and they would still see increased performance at a lower resolution with exactly the same number of draw calls.

KittenIsAGeek    1605
1 hour ago, bumbaclad said:

A lower resolution does not reduce the number of draw calls. You are implying that the user's camera has more objects off screen being culled, but that is splitting hairs and is not a direct correlation: they could get a mod that just zooms out more, leading to exactly the same number of draw calls, and they would still see increased performance at a lower resolution with exactly the same number of draw calls.

I'm curious to see if that's the case.  Is the number of elements drawn the same between fully zoomed in and fully zoomed out using screenshot mode?  Are we really rendering elements on the entire map?  I can't really tell, since my GPU is so incredibly under-utilized by ONI that it often sits around and ponders the meaning of its own existence.

I do know that revealing the entire map with debug does result in a CPU performance hit.  It doesn't change my GPU load any that I can tell.  Does that mean that an unrevealed map still has draw calls to everything even though they aren't visible?  

 

I've built CPUs.  I know what the hardware at that level does, and I know their points of failure.  I know that ONI's thermal engine pushes up against the limits of an x86-type CPU because of how data retrieval from memory works.  I can theorize how to reduce the problem, then I can go into my game, implement the changes, and immediately see results.  Through direct experimentation, I've become convinced that this is the root of the problem.

I haven't worked as much with GPUs, so all I can go off of is what I observe using the statistics that my computer will show me.  This isn't nearly as much information as I would like, and any test I can think of doesn't result in any changes I can track.  This suggests that either my tests are faulty, or the data I'm getting about my GPU is irrelevant to the problem.  Maybe I'm way off-base and don't have a clue what I'm talking about and it IS the draw calls -- I don't have a way of determining that with the tools I have.

However, I do know how CPUs work.  I do know how to construct tests to operate the CPU in a manner similar to what I suspect goes on in ONI's thermal engine.  I do know that the results I get agree with my hypothesis -- so I'm inclined to suspect that I am right.

Anyway, I'm done with this topic.  It's starting to feel like I'm banging my head against a brick wall.

bumbaclad    9
9 hours ago, KittenIsAGeek said:

I'm curious to see if that's the case.  Is the number of elements drawn the same between fully zoomed in and fully zoomed out using screenshot mode?  Are we really rendering elements on the entire map?

Anything off screen is being culled by the camera; I can assume this because it would be foolish not to, and I can observe it by zooming out and seeing the FPS drop.

You would be surprised by the astronomical increase in performance from clumping a bunch of draws into a single call. In my own work on this subject, in Unity but not with ONI, I could increase performance by nearly 100%. It can mean the difference between being unable to run an application on a low-end laptop and running it comfortably vsynced.
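As a sketch of what clumping draws into a single call can look like in Unity (generic GPU instancing, not how ONI actually renders; the mesh and material are assumed to be assigned in the inspector): one DrawMeshInstanced call submits up to 1023 copies instead of 1023 separate draw calls.

using UnityEngine;

// Attach to any GameObject. The mesh and an instancing-enabled material are
// assumed to be assigned in the inspector; this is a generic Unity sketch,
// not ONI's renderer.
public class InstancedTilesSketch : MonoBehaviour
{
    public Mesh mesh;
    public Material material;           // needs "Enable GPU Instancing" ticked

    const int Count = 1023;             // max instances per DrawMeshInstanced call
    Matrix4x4[] matrices = new Matrix4x4[Count];

    void Start()
    {
        // Lay the instances out in a grid once; only their transforms differ.
        for (int i = 0; i < Count; i++)
        {
            var pos = new Vector3(i % 32, i / 32, 0f);
            matrices[i] = Matrix4x4.TRS(pos, Quaternion.identity, Vector3.one);
        }
    }

    void Update()
    {
        // One draw call for all 1023 tiles, instead of 1023 individual calls.
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices, Count);
    }
}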

gabberworld    13

So let's talk about this lag again.

I opened the full map in sandbox, and when I move the camera I notice there is a huge freeze every second or so, which means there is a big loop delay somewhere, usually caused by a lot of memory (a large List) being used at the same time on the same thread.

I also moved the camera in pause mode, so there should be no real-time calculations running at that time.

 

ExEvolution    9

Sandbox/debug is the best way to see lag because once you uncover a cell, it starts processing it. Before you uncover it, it doesn't process. Out of sight, out of processor

gabberworld    13

The Unity engine's main thread is designed for drawing; if you also use it for something else, it just can't draw objects normally at the same time anymore.

I tried to explain this as simply as possible.
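A minimal sketch of that pattern in Unity C# (generic, nothing to do with ONI's internals): the heavy calculation runs on a background task, and only the finished result is handed back to the main thread, which is the only thread allowed to touch Unity objects and do the drawing.

using System.Collections.Concurrent;
using System.Threading.Tasks;
using UnityEngine;

public class OffMainThreadSketch : MonoBehaviour
{
    // Results computed on a worker thread, consumed on the main thread.
    readonly ConcurrentQueue<float> results = new ConcurrentQueue<float>();

    void Start()
    {
        // Kick the heavy calculation off the main thread.
        Task.Run(() =>
        {
            float acc = 0f;
            for (int i = 0; i < 10_000_000; i++) acc += Mathf.Sqrt(i);   // stand-in work
            results.Enqueue(acc);
        });
    }

    void Update()
    {
        // The main thread stays free to draw; it only applies finished results.
        while (results.TryDequeue(out float value))
            Debug.Log($"background work finished: {value}");
    }
}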

gabberworld    13

So I played around in sandbox, opening the map with the Reveal tool until I found out where the huge graphics delay starts.

I finally opened the whole game area except this one:

[screenshot: bug_.thumb.png, the remaining unrevealed area of the map]

And when I finally open this area as well, the game starts to freeze like hell. Maybe the developers want to check that out.

Note: it doesn't happen when dupes go there by themselves, but a bug is still a bug.

babba    558
On 10/16/2020 at 5:13 PM, gabberworld said:

By the way, I can run this game fine except for some minor issues here and there; I can live with those. But it really makes me see red when someone complains about lag in a 2D game; 3D games are more complex.

 

The complexity of a game is not necessarily determined by its visual output.

Gurgel    1806
9 hours ago, babba said:

The complexity of a game is not necessarily determined by its visual output.

Look at Chess software for an example, or at Go software for a more extreme one...

