
5 minutes ago, KittenIsAGeek said:

...

...

You DO realize that the long names in the files we can edit are only for US, right?  That they get crunched down to bit locations inside the actual running game, right?  

No, those names are used by the Unity engine to find the game objects. If you've developed in Unity before, you'll understand.

Also, the Unity engine is free for personal use; you can try it out if you're interested.


I will repeat: What YOU see when you're working with stuff for Unity is different from what the COMPUTER sees.  There is a code set that takes what YOU see and turns it into what the COMPUTER wants to see.    I guarantee that while the game is running, those long names aren't a part of the functional code. 


1 minute ago, KittenIsAGeek said:

I will repeat: What YOU see when you're working with stuff for Unity is different from what the COMPUTER sees.  There is a code set that takes what YOU see and turns it into what the COMPUTER wants to see.    I guarantee that while the game is running, those long names aren't a part of the functional code. 

So you're saying that if I open an ASM debugger right now, I won't find a single text command name in there? If that's true, great, but I doubt it.


2 minutes ago, gabberworld said:

So you're saying that if I open an ASM debugger right now, I won't find a single text command name in there? If that's true, great, but I doubt it.

An ASM debugger turns machine code into assembly code to make it easier for us to understand.  If the ASM debugger is smart, then it will use the framework of whatever application development kit you're working with to make it as human-friendly as possible.  So sure, you'll see those long text command names in there -- because the debugger put them back in so you could see what's going on.


2 hours ago, KittenIsAGeek said:

An ASM debugger turns machine code into assembly code to make it easier for us to understand.  If the ASM debugger is smart, then it will use the framework of whatever application development kit you're working with to make it as human-friendly as possible.  So sure, you'll see those long text command names in there -- because the debugger put them back in so you could see what's going on.

Unluckily for you, a hex editor shows a lot of game function names in the game DLL, which also means those names need to be read from memory, and that extra reading slows the game down.

This game is developed in C#; if we're talking about C++, that's a totally different level of performance.

Writing a game in C++ takes more time than in C#, though, which is one reason they use C# instead.

Memory performance also comes down to small things, like using a byte instead of an int.
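To make the byte-versus-int point concrete, here's a minimal C# sketch. The tile fields are made up, not ONI's actual data layout; it only shows how much the field types alone change the footprint of a 98304-tile map:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical per-tile data, not actual ONI structures.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct TileWide
{
    public int Element;       // 4 bytes
    public int Mass;          // 4 bytes
    public int Temperature;   // 4 bytes
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct TileNarrow
{
    public byte Element;       // 1 byte: enough for < 256 element ids
    public ushort Mass;        // 2 bytes: fixed-point mass
    public ushort Temperature; // 2 bytes: fixed-point temperature
}

class SizeDemo
{
    static void Main()
    {
        const int tiles = 98304;
        Console.WriteLine($"wide:   {Marshal.SizeOf<TileWide>() * tiles / 1024} KiB");
        Console.WriteLine($"narrow: {Marshal.SizeOf<TileNarrow>() * tiles / 1024} KiB");
        // wide: 1152 KiB, narrow: 480 KiB -- less data to stream through the cache.
    }
}
```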

---

Anyway, if you have big performance issues, you can reduce the game tiles a bit until you find the performance your PC likes.


5 hours ago, gabberworld said:

Unluckily for you, a hex editor shows a lot of game function names in the game DLL

And how would that be different from any other DLL, including the system ones?

The fact that a library has symbols in it doesn't mean that's what's used dynamically to look up code. Usually symbols are there to aid with debugging. An executable can have that mapping resolved at compile time or at load time. Sometimes load-time resolution is delayed until first execution; the symbols are used once, but after the code is loaded into the address space of a process, it's referenced by its address, not its name. That's literally what dynamic loading means.
DLL files contain different sections, so you don't need to load all of them into memory. Metadata like symbolic names need not be loaded into memory and can be discarded at will in normal operation (unless you're debugging, of course).
That's what @KittenIsAGeek is referring to. A debugger / disassembler would use whatever info is available as metadata in the executable file(s), plus everything else available as part of the SDK, to provide human-friendly output.
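To make "the symbols are used once" concrete, here's a small C# sketch: the name is resolved to an address a single time, and every later call goes through the raw address. The library and export ("libm.so.6", "cos") are just convenient examples on Linux, not anything from the game:

```csharp
using System;
using System.Runtime.InteropServices;

class SymbolLookupDemo
{
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate double CosFn(double x);

    static void Main()
    {
        // Resolve the symbol exactly once: name -> address.
        IntPtr lib  = NativeLibrary.Load("libm.so.6");     // example C library (Linux)
        IntPtr addr = NativeLibrary.GetExport(lib, "cos"); // the only place the name is used

        // From here on the string "cos" is irrelevant: calls go through the raw address.
        var cos = Marshal.GetDelegateForFunctionPointer<CosFn>(addr);
        Console.WriteLine(cos(0.0));                       // prints 1

        NativeLibrary.Free(lib);
    }
}
```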

5 hours ago, gabberworld said:

This game is developed in C#; if we're talking about C++, that's a totally different level of performance.

Generally speaking, maybe, but for a specific case like this you can't make such a bold statement, especially when complex libraries / engines are involved. If most of the CPU-heavy stuff is done in external libraries / by external engines, and the main program only has to make a few calls, doing that in C++ instead of C# would barely make any difference.


Quite frankly, to assume that the devs are completely clueless about programming optimizations is borderline insulting.

For example, heat exchange code has been mentioned in this thread, many times. According to this:

it's actually an external DLL written in C.

It's reasonable to assume that SimDLL contains most of the CPU heavy code for the simulation. Oh, it appears that the devs know what they're doing, and actually optimized the game by writing parts of it in C. It also appears that if the heavy-lifting parts are already written in C, there's little to be gained by rewriting the rest of the program in C++.
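To illustrate the pattern: a C# driver handing the heavy math to a native DLL looks roughly like this P/Invoke sketch. The export name and signature here are hypothetical; SimDLL's real interface isn't documented in this thread:

```csharp
using System;
using System.Runtime.InteropServices;

static class Sim
{
    // Hypothetical export: the point is only the pattern, C# orchestrates
    // while all the per-cell work happens inside the native library.
    [DllImport("SimDLL", CallingConvention = CallingConvention.Cdecl)]
    public static extern void UpdateHeatTransfer(float[] temps, int count, float dt);
}

class Driver
{
    static void Main()
    {
        var temps = new float[98304];
        // The managed side just makes the call; uncomment only if such a
        // library actually exists next to the executable:
        // Sim.UpdateHeatTransfer(temps, temps.Length, 0.2f);
        Console.WriteLine($"driver holds {temps.Length} cells, native code would crunch them");
    }
}
```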


25 minutes ago, TheMule said:

Oh, it appears that the devs know what they're doing, and actually optimized the game by writing parts of it in C. It also appears that if the heavy-lifting parts are already written in C, there's little to be gained by rewriting the rest of the program in C++.

And still users complain that they have lag issues in a game that has only 98304 tiles. For a 2D game like Factorio, 98304 tiles is nothing, and that's a game made in pure C++ and Lua.


3 minutes ago, gabberworld said:

And still users complain that they have lag issues in a game that has only 98304 tiles. For a 2D game like Factorio, 98304 tiles is nothing, and that's a game made in pure C++ and Lua.

You appear to think you know a lot more about game programming than people at Klei. I suggest you send them your CV.


1 hour ago, TheMule said:

You appear to think you know a lot more about game programming than people at Klei. I suggest you send them your CV.

Why? I'm not looking for a job at Klei. My main skills come from developing network programs, where every byte is important. In C#, most of the stuff is already made by other people, so you may not even know whether the code is good or not.

A game maker's main goal is to make money, and that's all; performance is not usually what they're after.


1 hour ago, gabberworld said:

A game maker's main goal is to make money, and that's all; performance is not usually what they're after.

On 7/3/2020 at 1:54 PM, Ipsquiggle said:

Most of the team has been hard at work staying safe and developing the new DLC, even while we've continued to fix issues and performance on the released version of the game.

Ipsquiggle must have misspoken, yeah, that must be it.

And yes, Klei are also money hungry, obviously:

https://forums.kleientertainment.com/announcement/67-a-note-to-our-community
 


31 minutes ago, yoakenashi said:

Ipsquiggle must have misspoken, yeah, that must be it.

And yes, Klei are also money hungry, obviously:

https://forums.kleientertainment.com/announcement/67-a-note-to-our-community
 

This is going off topic.

Also, the game is made in the Unity engine; they don't care about this at all if a user is on an older machine. The answer from them is to buy a better one.

But it's off topic.

Another thing a user can try is reducing the monitor resolution if they have a very old machine.


3 hours ago, gabberworld said:

And still users complain that they have lag issues in a game that has only 98304 tiles. For a 2D game like Factorio, 98304 tiles is nothing, and that's a game made in pure C++ and Lua.

People are complaining of lag because of what I posted earlier: this game literally smashes into the physical limitations of modern computer hardware.  The number of memory swaps and calculations for this simulation is incredible -- far greater than in any other game I've played.  In fact, as a real-time pseudo-physics simulator, it's more impressive than some of the tools I've used for homework assignments.  The calculations themselves aren't the big problem, though, as players have observed that moving to a faster CPU hasn't given them that much improvement.  The bandwidth between the RAM and the CPU appears to carry more weight.

Is there room for more optimization? Sure -- there always is.  BUT that isn't the root of the problem; the sheer amount of data we're number-crunching is.

My original post in this thread was a response to the 'how can I get better performance' question.  My advice still remains:

  1. Only dig out as much as you need to in order to get stuff done.
  2. Avoid multi-gas rooms.
  3. Avoid large gas rooms.
  4. When you use storage bins, only put a single element in them so that temperatures are averaged instead of continually calculated.
  5. Reduce the number of possible paths any particular dupe can take.

If you reduce the number of calculations, you're also reducing the amount of memory transfers that will be happening and you'll see improvement.

 

Finally, you can't compare this game to Factorio.  They do not operate in similar ways.  The physics engine of ONI is far more involved than that of Factorio.

59 minutes ago, gabberworld said:

Another thing a user can try is reducing the monitor resolution if they have a very old machine.

Monitor resolution will not solve the issue.  Graphics capabilities are not the problem here.  A very simple graphics card will handle everything ONI does.  


10 minutes ago, KittenIsAGeek said:

Finally, you can't compare this game to Factorio.  They do not operate in similar ways.  The physics engine of ONI is far more involved than that of Factorio.

True, they are different games.

I just want to point this out: I've played with memory stuff on different platforms, Lua, Delphi, C++, Lazarus, C#, and they all act differently, with exactly the same data.

By differently I mean speed performance.


3 hours ago, gabberworld said:

My main skills come from developing network programs, where every byte is important

This here is the root of the misunderstanding on this thread.  Network development is a very different beast from a physics simulation.  For a network, you are transferring a payload across a network.  You have to deal with the size of your packet's frame and the other traffic on the network.  Latency is orders of magnitude greater than anything between the CPU and RAM will have, and it is going to depend a lot on hardware in the middle, so a small improvement in your transfer rates will show a huge performance improvement.

Data transfer between CPU and RAM doesn't happen this way.  Instead, the CPU says "I want the data at this address."  Then the RAM says, "OK, here you go."  Nothing is put into frames, and there's no dealing with other traffic.  The maximum rate you can transfer any data is limited by the speed and bandwidth of the RAM.  It is, incidentally, the same rate ANY data is transferred between the CPU and RAM.  The reason L1 and L2 caches were introduced is that newer CPUs operate far faster than data can be moved from RAM.  In an average program, smart parts of the CPU predict ahead of time what memory is likely to be needed and move it into place.  This works great most of the time, but there are some particular points of failure.  The point of failure that ONI runs into is that so much of the memory needs to be analyzed that the prefetcher can't have the next set of data ready ahead of time.

Let's see if I can simplify this...

OK, you're writing a report and you need to reference parts of a book.  Ahead of time you open up the book and put sticky notes on the pages that have the data you're going to be using.  Then when you're writing the report, you simply flip to the next tab.  Works great.  Now.. what if you're translating the book into another language?  You can't just put tabs ahead of time on pages, because you're going to be using every single page.  This is what is happening with ONI.  
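To put numbers on the sticky-note analogy, here's a C# sketch that sums the same array twice: once in order, where the prefetcher can "tab the pages" ahead of time, and once in shuffled order, where it can't. On typical hardware the shuffled pass is several times slower even though the arithmetic is identical:

```csharp
using System;
using System.Diagnostics;

class PrefetchDemo
{
    static void Main()
    {
        const int n = 1 << 24;          // 16M ints = 64 MiB, far bigger than any CPU cache
        var data  = new int[n];
        var order = new int[n];
        var rng = new Random(1);
        for (int i = 0; i < n; i++) order[i] = i;
        for (int i = n - 1; i > 0; i--) // Fisher-Yates shuffle: defeats the prefetcher
        {
            int j = rng.Next(i + 1);
            (order[i], order[j]) = (order[j], order[i]);
        }

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) sum += data[i];        // predictable: prefetch works
        Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < n; i++) sum += data[order[i]]; // unpredictable: cache misses
        Console.WriteLine($"random:     {sw.ElapsedMilliseconds} ms ({sum})");
    }
}
```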


9 minutes ago, KittenIsAGeek said:

This here is the root of the misunderstanding on this thread.  Network development is a very different beast from a physics simulation.  For a network, you are transferring a payload across a network.  You have to deal with the size of your packet's frame and the other traffic on the network.  Latency is orders of magnitude greater than anything between the CPU and RAM will have, and it is going to depend a lot on hardware in the middle, so a small improvement in your transfer rates will show a huge performance improvement.

 

By network I mean network programs that need to hold a lot of live users who are communicating with each other all the time,

which also means I need the fastest memory access possible; ASM is best for that.


24 minutes ago, KittenIsAGeek said:

I assumed this was what you meant.

 

Because it needs to hold users online, it also needs to use memory to store data about who is online. If you use a poorly made memory index to fetch the stored data, huge delays appear in the bigger loops. In my eyes, 98304 game tiles is like holding 98304 customers online and sending every customer the data they need from memory, like water and gas. Now if I write poor code, my bandwidth usage increases, and so the goal is to make it as small as possible so that all 98304 customers are happy.

 

 

Also, yes, I understand that this game uses calculations for gases and other stuff, and that's where multi-CPU usage with multithreading comes in. Back in the year 2000 you could only dream of doing this on a home PC, running multiple calculations at the same time.


The simulation is the majority of all the game's processing; this is why your framerate can drop from 60 fps to a crawl the moment you unpause on a large base.

You have to calculate the physics for every cell in the map; this means things like entropy, motion, and the merging and splitting of gas/liquid packets many times per second. These calculations alone are very heavy on the CPU, and this is before you ever account for duplicant and critter pathfinding as they try to locate materials to perform their chores. The game also cheats the system a little: things you have not discovered yet do not process physics. Your game runs buttery smooth when the only discovered area is the printing pod, because it just doesn't have to do that much work, but as you expand, the number of calculations per second goes up, and performance goes down.
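A toy sketch of that idea in C# (not Klei's actual code): the grid is walked every tick, and the "cheat" is simply skipping cells that haven't been discovered yet. The Cell fields and the update are placeholders:

```csharp
using System;

struct Cell
{
    public bool Discovered;
    public float Temperature;
}

class SimSketch
{
    const int Width = 384, Height = 256;        // 384 * 256 = 98304 cells
    static Cell[] cells = new Cell[Width * Height];

    static void Tick(float dt)
    {
        for (int i = 0; i < cells.Length; i++)
        {
            if (!cells[i].Discovered) continue; // fog-of-war cells cost almost nothing
            // Real physics (entropy, diffusion, packet merging) would also read the
            // neighbouring cells here, multiplying the memory traffic per cell.
            cells[i].Temperature += 0.01f * dt;
        }
    }

    static void Main()
    {
        cells[0].Discovered = true;             // only the printing pod area so far
        Tick(0.2f);
        Console.WriteLine(cells[0].Temperature);
    }
}
```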

 

When the number of calculations per second gets too high, you start to notice things like duplicants stopping to "think" about what they're going to do next, and the problem only gets more noticeable at higher simulation speeds.

 

You're really making an apples-to-oranges comparison between sending data over a network and a physics simulation engine. They just aren't the same, and the former has nowhere near as much processing happening.

 

Things you can do to reduce these issues are to build your base around how the game actually works. Reduce the number of calculations: fewer paths to get around the base, less mixed gas, or even better, vacuum out the whole base and only pressurize areas that need it. Have all duplicants use atmo suits instead of pressurizing the map. Don't expose space until you're ready to build there, etc.

 

They may be able to improve multithreading in the game, but multithreading is hard: going too hard on high thread counts may negatively impact performance for lower spec systems, and may negatively impact stability of the game itself. Multithreading still needs to be synchronized; most games tend to have a main thread that does the bulk of the work and offloads certain functions to other threads, things where how in sync they are doesn't matter as much.
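That main-thread-plus-helpers pattern looks roughly like this C# sketch; FindPath here is a hypothetical stand-in for any job whose exact timing doesn't matter:

```csharp
using System;
using System.Threading.Tasks;

class OffloadSketch
{
    static int[] FindPath(int from, int to) => new[] { from, to }; // placeholder work

    static void Main()
    {
        // Kick the slow work onto the thread pool...
        Task<int[]> pathJob = Task.Run(() => FindPath(0, 42));

        // ...while the main thread keeps simulating/rendering frames.
        for (int frame = 0; frame < 3; frame++)
            Console.WriteLine($"frame {frame} simulated");

        // Synchronize once, when the result is actually needed.
        int[] path = pathJob.Result;
        Console.WriteLine($"path ready: {string.Join(" -> ", path)}");
    }
}
```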


I spent a couple of hours reading the Assembly-CSharp code and came to a conclusion: they made the game in Unity; pretty much everything is a GameObject. Most of the game code is C#. There's the physics engine, a.k.a. SimDLL, but it only handles physics things, like pipes.

Native code is a lot faster than C#. I would prefer SimDLL to be the game, with Unity only as the renderer, but maybe native code (possibly C/C++) is too hard to write ...


On 10/14/2020 at 10:07 PM, ExEvolution said:

going too hard on high thread counts may negatively impact performance for lower spec systems, and may negatively impact stability of the game itself.

No, multithreading is actually what lets players on older PCs run the game more smoothly.

Let me put it another way: if a game, or any other app, doesn't use multithreading, then only 25% of the CPU can be used for calculation on a 4-core PC.

--

That was a very big issue when the first 2-core CPUs were released; you may not even know that, as some here are too young. Most games couldn't run normally at all because of that, since code in the old days was single-threaded.

By the way, I still have a working Pentium 4 PC at home from those old days when multithreading was a myth.

--

You say networking is different, but I'm talking about network programs here. If I don't use multithreading, then a 64-core server is useless, as I can only use 1 core at a time.

-

On a 4-core PC that means I can divide the 98304-tile database by 4 if I use multithreading, so one core only needs to handle 24576 tiles instead of 98304.
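In C# that split is exactly what Parallel.For does: it partitions the index range across the available cores, so on a 4-core machine each core gets roughly a quarter of the tiles. The per-tile work below is a placeholder:

```csharp
using System;
using System.Threading.Tasks;

class TileSplitDemo
{
    static void Main()
    {
        const int tiles = 98304;
        var temperature = new float[tiles];

        // The runtime partitions [0, tiles) across worker threads; each index is
        // touched by exactly one thread, so no locking is needed for this write.
        Parallel.For(0, tiles, i =>
        {
            temperature[i] += 0.1f;   // stand-in for a real per-tile calculation
        });

        Console.WriteLine($"updated {tiles} tiles on up to {Environment.ProcessorCount} cores");
    }
}
```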

-

Unity can run GameObjects only on the main thread (i.e. on 1 CPU core), but that doesn't mean you can't run other calculations on other threads.

-

You don't need to read Assembly-CSharp for hours to understand that they're using the Unity compiler.

Unity by default always adds _Data to its folder names.

 


10 hours ago, MinhPham said:

Native code is a lot faster than C#.

Not necessarily. If you have an external library that renders a very complex 3D scene, and it takes minutes, and you have this pseudocode:

import 3dlib

call 3dlib.RenderScene('scene.dat')

you can write that in C, C++, C#, LPC, Java, JavaScript, Python, Lua, PHP, Pascal, whatever; it doesn't impact the performance, because all the CPU-intensive stuff is outside the language.

And even if rendering the scene involves multiple steps, calling those steps from a higher-level program doesn't affect the performance, as long as the driver program doesn't implement actual algorithms that manipulate the data.

It has been speculated in this thread that lag is caused by physics simulation code written in C#, which is not true.


22 minutes ago, bumbaclad said:

The majority of lag is likely due to draw calls. They need to just move the game over to shader chunks instead, which is going to make things like modding much more complicated, but the user will see a 50+% performance increase.

Yes, correct, some stuff should be moved away from the main thread. For example, when the game save runs on a separate thread, there is basically no game freeze when a save happens. This all comes from regular apps; games are no different except for the nice-looking moving objects.
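A minimal sketch of that idea in C# (the byte[] snapshot and "save.bin" are illustrative, not ONI's save format): copy the state on the main thread, then write it out on a worker so the frame loop never blocks on the disk:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class AsyncSaveSketch
{
    static Task SaveAsync(byte[] snapshot, string path) =>
        Task.Run(() => File.WriteAllBytes(path, snapshot)); // disk I/O off the main thread

    static void Main()
    {
        byte[] snapshot = new byte[1024];      // copy of game state, taken synchronously
        Task save = SaveAsync(snapshot, "save.bin");

        // The game loop keeps running while the file is written.
        Console.WriteLine("still simulating...");

        save.Wait();                           // or just observe completion next frame
        Console.WriteLine("save finished");
    }
}
```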


16 minutes ago, gabberworld said:

Yes, correct, some stuff should be moved away from the main thread. For example, when the game save runs on a separate thread, there is basically no game freeze when a save happens. This all comes from regular apps; games are no different except for the nice-looking moving objects.

That is not moving processing to another CPU thread, but instead moving the majority of the work over to the GPU, which by comparison makes the CPU look like a toddler. I would like to see saving moved to another thread like Don't Starve Together did, but I'm thinking there must be a good reason that didn't happen with ONI. Multithreading on the CPU is not the answer in most cases when it pertains to games. But on the video card there actually is real simultaneous multithreading happening behind the scenes, without programmer intervention. By drawing everything in a few draw calls rather than hundreds, even a low-end machine has a chance to run the game, and users on high-end machines can play the end game at an acceptable framerate.
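In Unity terms, "a few draw calls instead of hundreds" usually means batching or GPU instancing. A hedged sketch of the latter (the mesh and material are placeholders, not ONI's assets): up to 1023 copies of one mesh are submitted to the GPU in a single call:

```csharp
using UnityEngine;

public class InstancedTiles : MonoBehaviour
{
    public Mesh tileMesh;           // placeholder tile quad
    public Material tileMaterial;   // must have "Enable GPU Instancing" ticked
    Matrix4x4[] transforms = new Matrix4x4[1023]; // 1023 is the per-call instancing limit

    void Start()
    {
        // Lay the tiles out in a grid; only the transform differs per instance.
        for (int i = 0; i < transforms.Length; i++)
            transforms[i] = Matrix4x4.Translate(new Vector3(i % 32, i / 32, 0));
    }

    void Update()
    {
        // One draw call renders all 1023 tiles; the GPU fans the work out itself.
        Graphics.DrawMeshInstanced(tileMesh, 0, tileMaterial, transforms, transforms.Length);
    }
}
```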


11 minutes ago, bumbaclad said:

That is not moving processing to another CPU thread, but instead moving the majority of the work over to the GPU, which by comparison makes the CPU look like a toddler. Multithreading on the CPU is not the answer in most cases when it pertains to games. By drawing everything in a few draw calls rather than hundreds, even a low-end machine has a chance to run the game, and users on high-end machines can play the end game at an acceptable framerate.

If you don't use multithreading, then at some point you may hit the CPU limit, which means calculations freeze and the FPS drops too; you can't calculate more than the CPU can do at one time. That's where multithreading comes in, to run the calculations and loops separately.

We don't live in the year 2000 anymore, where every thread ran on the same CPU core.

