
The PostgreSQL uuidv7() developers are leaning toward a different approach to preventing timestamp leaks, similar to FHSS (https://en.wikipedia.org/wiki/Frequency-hopping_spread_spect...). This is currently possible manually via the "shift" parameter (https://postgrespro.ru/docs/postgresql/18/functions-uuid), but could be automated in the near future.
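For example, a minimal sketch of the manual approach, assuming the optional interval argument that PostgreSQL 18's uuidv7() accepts (the offset value here is purely illustrative):

    -- Generate a version 7 UUID whose embedded timestamp is shifted by an
    -- arbitrary, privately chosen offset (illustrative value only).
    SELECT uuidv7(INTERVAL '-3 years 17 days 5 hours');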


PostgreSQL Gains a Built-in UUIDv7 Generation Function for Primary Keys (many interesting details)

https://habr.com/en/news/950340/


Offsetting the timestamp is much better. Use the uuidv7(shift interval) function in PostgreSQL 18:

https://www.postgresql.org/docs/18/functions-uuid.html

In Percona Server for MySQL: https://docs.percona.com/percona-server/8.4/uuid-versions.ht...
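As a rough sketch of how this might look as a primary-key default in PostgreSQL 18 (the table, columns, and offset are made up for illustration):

    -- Hypothetical table: every row gets a v7 UUID whose timestamp is offset
    -- by a fixed, secret interval chosen for this deployment.
    CREATE TABLE orders (
        id         uuid        PRIMARY KEY DEFAULT uuidv7(INTERVAL '-42 days 13 hours'),
        created_at timestamptz NOT NULL DEFAULT now()
    );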


95% of the comments here have nothing to do with reality.

What May Surprise You About UUIDv7 https://medium.com/@sergeyprokhorenko777/what-may-surprise-y...


Bad idea. In PostgreSQL 18, the optional shift parameter shifts the computed timestamp by the given interval:

https://www.postgresql.org/docs/18/functions-uuid.html


That still exposes the timestamp, and the shift just drops precision, so I'm not sure what you're going for here.


If you shift the timestamp forward by 5 thousand years, it can hardly be called just a decrease in precision.
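To illustrate (a sketch assuming uuid_extract_timestamp(), available since PostgreSQL 17, and the shift argument of uuidv7() in PostgreSQL 18):

    -- The extracted timestamp lands roughly 5000 years in the future, far from
    -- the real generation time; without the secret offset it tells an attacker
    -- nothing useful.
    SELECT uuid_extract_timestamp(uuidv7(INTERVAL '5000 years'));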



What does "looks old school" mean? Do you want to wrap this format in JSON, like JSON-LD? I don't mind.


See "DSL for Bitemporal Sixth Normal Form with UUIDv7" https://github.com/sergeyprokhorenko/6NF_DSL


Oh, this is neat... thank you for sharing! Naturally, the "I don't know what I don't know" problem plagues me, as a solo maker trying to feel his way around this (temporal) space.

Maybe it's time (hehe) someone started a 6NF Conf.


This is a fairly common problem. Data is often transferred between information systems in denormalized form (tables with hundreds of columns, i.e. attributes). In the data warehouse it is normalized (duplication is eliminated by replacing repeated values with references to reference tables) to make complex analytical queries easier. Usually the data is normalized to 3NF and very rarely to 6NF, since there is still no convenient tool for 6NF (see my DSL: https://medium.com/@sergeyprokhorenko777/dsl-for-bitemporal-... ). Then the data is denormalized again in data marts to generate reports for external users.

All these cycles of normalization, denormalization, normalization, denormalization are very expensive for IT departments. So I had the idea of transferring data between information systems directly in normalized form, so that nothing would have to be normalized again. The prototypes were the Anchor Modeling and (to a much lesser extent) Data Vault methodologies.
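For readers unfamiliar with 6NF, here is a rough plain-SQL illustration (not the DSL itself; the table and column names are invented) of the Anchor-Modeling-style decomposition described above, with one attribute per table and bitemporal columns:

    -- Anchor: the bare identity of a customer, keyed by a UUIDv7 surrogate.
    CREATE TABLE customer (
        customer_id uuid PRIMARY KEY DEFAULT uuidv7()
    );

    -- One 6NF table per attribute, with bitemporal history:
    -- valid_from  = when the fact was true in the real world,
    -- recorded_at = when the system learned about it.
    CREATE TABLE customer_name (
        customer_id uuid        NOT NULL REFERENCES customer,
        name        text        NOT NULL,
        valid_from  timestamptz NOT NULL,
        recorded_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (customer_id, valid_from, recorded_at)
    );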


Cool to see you tackle this problem.

If I were you, though, I'd consider whether you'd get more traction with an open-source extension of the Iceberg format that supports row-based reporting and indexes, for a unified open-source HTAP ecosystem.


Nice. Anchor Modelling is underappreciated.

Gonna have a look at your DSL.

