What is wrong with two-phase locking?

Why can't the whole problem with all databases be solved so easily with one simple magic algorithm?
✓ First of all, with these locks you inevitably get deadlocks: a chicken-and-egg situation where it is not clear who should go first.
One transaction needs resource x, another needs y; each locks its own, and then they need the same resources crosswise, so neither can proceed and it is unclear who should release its lock first. For this purpose, databases have special subsystems for deadlock detection and so-called deadlock killing. Deadlocks cannot be resolved peacefully, only by rolling back one of the transactions.
Usually the mathematics inside deadlock detection is a waits-for graph: the vertices are labeled with transaction IDs, and a directed edge indicates which transaction waits for a lock held by which other one. Cycles are searched for in this graph, and a victim is then chosen heuristically; for example, if a very large number of transactions are waiting on one transaction, that transaction is killed.
There are also other elegant mathematical approaches, which you can find under the topic of deadlock detection.
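A minimal sketch of the cycle search on a waits-for graph, using depth-first search with vertex coloring (the function name and graph are illustrative, not any database's actual API):

```python
# Waits-for graph: edge u -> v means transaction u waits for a lock held by v.
def find_deadlock(waits_for):
    """Return a list of transaction ids forming a cycle, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {t: WHITE for t in waits_for}

    def dfs(u, path):
        color[u] = GRAY
        path.append(u)
        for v in waits_for.get(u, []):
            if color.get(v, WHITE) == GRAY:      # back edge: cycle found
                return path[path.index(v):]
            if color.get(v, WHITE) == WHITE:
                cycle = dfs(v, path)
                if cycle:
                    return cycle
        path.pop()
        color[u] = BLACK
        return None

    for t in list(waits_for):
        if color[t] == WHITE:
            cycle = dfs(t, [])
            if cycle:
                return cycle
    return None

graph = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"], "T4": ["T1"]}
print(find_deadlock(graph))   # ['T1', 'T2', 'T3'] — one of these is the victim
```

Once a cycle is found, the victim-selection heuristic picks one transaction from it to roll back, which breaks the cycle and lets the others proceed.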
✓ The second point is that it is slow: nobody wants to wait on locks.
Some transactions hold a resource for a long time: for example, a long-running report occupies a resource and everyone else has to wait. To prevent this from happening, various improvements have been invented, which I will tell you about a bit later.

✓ But this is how serializability is ensured.
Without two-phase locking, there is no serializability. So you have to think of ways to improve two-phase locking to reduce waiting time.
In any modern database, two-phase locking is the main way to ensure integrity and serializability, even if we are talking about versioned databases.
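The protocol itself can be sketched in a few lines: a growing phase in which a transaction only acquires locks, and a shrinking phase in which it only releases them. This is a simplified illustration of strict two-phase locking (all locks held until commit), with a hypothetical `Transaction` class, not any database's real interface:

```python
import threading

class Transaction:
    """Toy strict two-phase locking: acquire-only until commit/rollback."""

    def __init__(self):
        self.held = []          # locks acquired during the growing phase

    def lock(self, lock):
        lock.acquire()          # growing phase: locks may only be acquired
        self.held.append(lock)

    def commit(self):
        # Shrinking phase: all locks are released together at the end,
        # so no other transaction can see a partially finished state.
        for lock in reversed(self.held):
            lock.release()
        self.held.clear()

x, y = threading.Lock(), threading.Lock()
t = Transaction()
t.lock(x)
t.lock(y)                       # read/write x and y under protection here
t.commit()                      # both locks released at once
```

Because no lock is released before all needed locks are acquired, any two conflicting transactions are forced into some serial order, which is exactly the serializability guarantee discussed above.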