When a server process such as MDriven Turnkey or the MDrivenServer service holds several ecospaces in the same process, we now (from 2023-04-09) have a mechanism called SharedBigValue.
What this does: a loaded attribute value qualifies for sharing when it
- is a byte[] or a string,
- is larger than 8192 bytes,
- carries int.MaxValue as its version number (i.e., it is the latest version),
- shares the same object id and attribute id, and
- shares the same typesystem checksum.
If all of the above hold, the cache will really hold a SharedBigValue instead of the raw value.
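A minimal sketch of what that eligibility test amounts to (illustrative only: apart from FetchedBlockHandler.kBigValueThreshold, mentioned at the end of this page, every name here is an assumption, not the actual MDriven API; the object id, attribute id, and typesystem checksum act as the sharing key rather than as part of this test):

 // Illustrative sketch - IsEligibleForSharing is an assumed name; only
 // FetchedBlockHandler.kBigValueThreshold is confirmed elsewhere on this page.
 static bool IsEligibleForSharing(object value, int version)
 {
     // Only the latest version qualifies, marked by version == int.MaxValue.
     if (version != int.MaxValue)
         return false;
     // Only byte[] and string values above the size threshold qualify
     // (whether string length is counted in chars or bytes is assumed here).
     if (value is byte[] bytes)
         return bytes.Length > FetchedBlockHandler.kBigValueThreshold;
     if (value is string text)
         return text.Length > FetchedBlockHandler.kBigValueThreshold;
     return false;
 }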
- All public access methods that fetch a cache value screen for a SharedBigValue and, if one is found, resolve it to the real value and return that.
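Readers therefore never see the wrapper itself; a sketch of that screening (SharedBigValue is the real class name, but the Value member and the method name below are assumptions):

 // Illustrative sketch - member and method names are assumptions.
 object ResolveCacheValue(object stored)
 {
     // Public accessors screen for the wrapper and hand back the payload.
     if (stored is SharedBigValue shared)
         return shared.Value;  // the real byte[] or string
     return stored;            // ordinary values pass through untouched
 }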
Only when objects are loaded from PS and hit the ApplyDataBlock method do we consider creating or looking up a SharedBigValue.
- We do this by keeping a static dictionary on the Cache that maps a key to its SharedBigValue.
- If the key already exists, we return the existing SharedBigValue; otherwise, we create a new SharedBigValue, store it in the dictionary, and return it.
Reading is protected by a ReadLock that can be upgraded to a WriteLock if we need to create; a sketch follows.
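A sketch of that lookup-or-create path using .NET's ReaderWriterLockSlim, whose upgradeable read lock matches the ReadLock-upgraded-to-WriteLock description (all names besides SharedBigValue are assumptions, and the SharedBigValue stub below merely stands in for the real MDriven class):

 using System.Collections.Generic;
 using System.Threading;

 // Stub standing in for the real MDriven class so the sketch compiles.
 class SharedBigValue
 {
     public object Value;
     public SharedBigValue(object v) { Value = v; }
 }

 // Key combining the identifiers listed above; a record gives value
 // equality. The component types are assumptions.
 record BigValueKey(long ObjectId, int AttributeId, string TypeSystemChecksum);

 static class SharedBigValueRegistry // illustrative stand-in for the Cache
 {
     static readonly Dictionary<BigValueKey, SharedBigValue> sharedValues = new();
     static readonly ReaderWriterLockSlim sharedLock = new();

     public static SharedBigValue GetOrCreate(BigValueKey key, object payload)
     {
         // Upgradeable read lock: cheap when the value is already shared.
         sharedLock.EnterUpgradeableReadLock();
         try
         {
             if (sharedValues.TryGetValue(key, out var existing))
                 return existing;          // common case: reuse the shared value
             sharedLock.EnterWriteLock();  // upgrade only when we must create
             try
             {
                 var created = new SharedBigValue(payload);
                 sharedValues.Add(key, created);
                 return created;
             }
             finally { sharedLock.ExitWriteLock(); }
         }
         finally { sharedLock.ExitUpgradeableReadLock(); }
     }
 }

Only one thread can hold the upgradeable lock at a time, but it coexists with plain readers, so lookups of already-shared values stay cheap while creation is fully serialized.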
Limitations I consider okay until reality proves otherwise:
- It is only the DB-loaded (old) value that is the target for SharedBigValue; writes/updates of large blocks are handled as before, and we do not try to share those.
- We do not actively destroy SharedBigValues when a new model is uploaded, which changes the checksum and forces all existing ecospaces to be recreated; this is considered an uncommon production scenario.
Ways to test: make a model with an Image and a Text attribute, run Turnkey with two different users (or two different browsers), update the large text and image in one, and make sure they update in the other.
Expected positive effect: only one instance of a large value is held in memory even if 1000 users look at the same thing.
Expected negative effect: additional overhead for large texts and byte arrays, kept low by the checks above; I do not expect it to be noticeable.
Currently, this feature is always on; you can stop it from having an effect by setting:
FetchedBlockHandler.kBigValueThreshold=int.MaxValue;
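Setting the threshold to int.MaxValue means no loaded value ever counts as big, so nothing gets wrapped in a SharedBigValue; presumably the default threshold corresponds to the 8192-byte limit listed above.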