=== 2023-04-09 ===
When a server process like MDriven Turnkey or the MDrivenServer service holds several ecospaces in the same process, we now have a mechanism called SharedBigValue.
What this does is:
# If a loaded attribute value is a byte[] or string
# Larger than 8192 bytes
# At version maxint (i.e., the latest version)
# Sharing the same object ID and attribute ID
# Sharing the same typesystem checksum

...if the above is true, the cache will hold a SharedBigValue.

* All public access methods that read a cache value screen for a SharedBigValue - and if one is found, resolve it to the real value and return that.
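To make the read path concrete, here is a minimal C# sketch of that screening step. Only the name SharedBigValue comes from the article; CacheSlot and all member names are illustrative assumptions, not MDriven's actual API:

<syntaxhighlight lang="csharp">
// Minimal sketch of the read-side screening. Only the name SharedBigValue
// is from the article; CacheSlot and the members are illustrative.
public sealed class SharedBigValue
{
    public SharedBigValue(object value) { Value = value; }

    // The large byte[] or string that all ecospaces in the process share.
    public object Value { get; }
}

public class CacheSlot
{
    // Holds either the raw attribute value or a process-wide SharedBigValue.
    private object _content;

    public CacheSlot(object content) { _content = content; }

    public object GetValue()
    {
        // Screen for a SharedBigValue and, if found, resolve it to the
        // real value - callers never see the wrapper.
        return _content is SharedBigValue shared ? shared.Value : _content;
    }
}
</syntaxhighlight>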
Only when objects are loaded from PS (persistent storage) and hit the ApplyDataBlock method do we consider creating or looking up a SharedBigValue.

* We do this by keeping a static dictionary that maps the cache key to its SharedBigValue.
* If the key already exists, we return the existing SharedBigValue - otherwise, we create a SharedBigValue, store it in the dictionary, and return it.

Reading is protected by a ReadLock that can be upgraded to a WriteLock if we need to create.
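A sketch of that get-or-create step, assuming a key of (object ID, attribute ID, typesystem checksum) per the criteria above, and using .NET's ReaderWriterLockSlim for the read lock that upgrades to a write lock. SharedBigValue is the sketch type from above; the store's name and members are assumptions, not MDriven's internals:

<syntaxhighlight lang="csharp">
using System.Collections.Generic;
using System.Threading;

// Sketch of the static lookup dictionary with read-lock-upgraded-to-write
// creation. The key shape follows the sharing criteria listed earlier;
// none of these names are MDriven's actual internals.
public static class SharedBigValueStore
{
    private static readonly Dictionary<(long ObjectId, int AttributeId, string Checksum), SharedBigValue> _shared =
        new Dictionary<(long, int, string), SharedBigValue>();
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public static SharedBigValue GetOrCreate(long objectId, int attributeId, string checksum, object bigValue)
    {
        var key = (objectId, attributeId, checksum);
        // Upgradeable read: plain readers stay concurrent, and we only
        // escalate to a write lock when the key is missing.
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (_shared.TryGetValue(key, out var existing))
                return existing;
            // Only one thread at a time holds the upgradeable lock, so no
            // competing writer can insert between the probe and the upgrade.
            _lock.EnterWriteLock();
            try
            {
                var created = new SharedBigValue(bigValue);
                _shared.Add(key, created);
                return created;
            }
            finally { _lock.ExitWriteLock(); }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }
}
</syntaxhighlight>

The upgradeable lock keeps the common case (the value is already shared) cheap, while still serializing the rare creation path.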
==== Limitations I consider okay until reality proves otherwise ====
# Only DB-loaded (old) values are targets for SharedBigValue - thus, writes/updates of large blocks are handled as before, and we do not try to share those.
# We do not actively destroy SharedBigValues if a new model is uploaded - changing the checksum and forcing all existing ecospaces to be recreated. This is considered an uncommon production scenario.
'''Ways to test:''' Model with an Image and a Text attribute, run Turnkey with two different users or two different browsers, and update the large text and image in one - make sure they update in the other.

''Expected positive effect:'' Only one instance of a large value is held in memory even if 1000 users look at the same thing.

''Expected negative effect:'' Additional overhead for large texts and byte arrays, but this is kept low by the checks above - I do not expect it to be noticeable.
Currently, this feature is always on. You can stop it from having an effect by setting:

<syntaxhighlight lang="csharp">
FetchedBlockHandler.kBigValueThreshold = int.MaxValue;
</syntaxhighlight>

{{Edited|July|12|2024}}

[[Category:MDriven Turnkey]]
[[Category:MDriven Server]]