When a server process, like the MDriven Turnkey or MDrivenServer service, holds several ecospaces in the same process, we now (from 2023-04-09) have a mechanism called SharedBigValue.
What this does is:
- If a loaded attribute value is a byte[] or string
- larger than 8192 bytes
- with maxint as its version (i.e. the latest version)
- sharing the same object id and attribute id
- sharing the same typesystem checksum
...if all of the above are true, the cache will actually hold a SharedBigValue, as sketched below.
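A minimal sketch of these eligibility checks, assuming illustrative names (SharedBigValueRules, Qualifies, Threshold are my own, not MDriven's actual API):

```csharp
// Hypothetical sketch of the eligibility rules above - names are illustrative.
static class SharedBigValueRules
{
    // Values must be larger than this many bytes to qualify for sharing.
    const int Threshold = 8192;

    public static bool Qualifies(object value, int version)
    {
        int sizeInBytes = value switch
        {
            byte[] bytes => bytes.Length,
            string text  => text.Length * sizeof(char), // UTF-16: 2 bytes/char (assumed size measure)
            _            => -1                          // other types never qualify
        };
        // Must exceed the threshold and carry maxint as its version (the latest version).
        return sizeInBytes > Threshold && version == int.MaxValue;
    }
}
```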
All public access methods that get to a cache value will screen for a SharedBigValue - and if one is found - resolve to the real value and return this.
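The screening step could look roughly like this; the real shape of SharedBigValue is not shown in the text, so a minimal stand-in is used:

```csharp
// Minimal stand-in for the wrapper type (assumed shape, not the real class).
sealed class SharedBigValue
{
    public object Value { get; }
    public SharedBigValue(object value) => Value = value;
}

// Hypothetical resolution step used by the public cache accessors:
// unwrap a SharedBigValue if present, otherwise return the stored value as-is.
static object ResolveCacheValue(object stored) =>
    stored is SharedBigValue shared ? shared.Value : stored;
```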
Only when objects are loaded from the persistence storage (PS) and hit the ApplyDataBlock method do we consider creating or looking up a SharedBigValue.
We do this by keeping a static dictionary on the Cache that maps key to SharedBigValue.
If the key already exists, we return the existing SharedBigValue; otherwise we create a SharedBigValue, store it in the dictionary, and return it.
Reading is protected by a ReadLock that can be upgraded to a WriteLock if we need to create a new entry.
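A sketch of that get-or-create path, assuming .NET's ReaderWriterLockSlim and a hypothetical SharedKey record built from object id, attribute id, and typesystem checksum (it reuses the SharedBigValue stand-in from the sketch above; the real key and class shapes in MDriven may differ):

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical key: object id + attribute id + typesystem checksum, per the rules above.
record SharedKey(string ObjectId, int AttributeId, string TypeSystemChecksum);

static class SharedBigValueCache
{
    static readonly Dictionary<SharedKey, SharedBigValue> _shared = new();
    static readonly ReaderWriterLockSlim _lock = new();

    public static SharedBigValue GetOrCreate(SharedKey key, object rawValue)
    {
        // Take the upgradeable read lock: plain readers can run concurrently,
        // and this thread can escalate to a write lock if it must create.
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (_shared.TryGetValue(key, out var existing))
                return existing;

            _lock.EnterWriteLock();
            try
            {
                var created = new SharedBigValue(rawValue);
                _shared[key] = created;
                return created;
            }
            finally { _lock.ExitWriteLock(); }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }
}
```

Note that in .NET a plain read lock cannot be upgraded directly; ReaderWriterLockSlim instead offers an upgradeable read lock that only one thread may hold at a time, which is what the description above maps to.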
----
Limitations that I consider to be OK until reality proves otherwise:
- Only db-loaded (old) values are targets for SharedBigValue - writes/updates of large blocks are handled as before, and we do not try to share them.
- We do not actively destroy SharedBigValues if a new model is uploaded (changing the checksum and forcing all existing ecospaces to be recreated) - this is not considered a common production scenario.
----
Ways to test: create a model with an Image and a Text attribute, run Turnkey with two different users (or two different browsers), update the large text and image in one - and make sure they update in the other.
----
Expected positive effect: only one instance of a large value is held in memory, even if 1000 users look at the same thing.
Expected negative effect: additional overhead for large texts and byte arrays, kept low by the checks above - I do not expect it to be noticeable.
Currently this feature is always on.