The Windows implementation of the registry deals with two entities: a registry hive in memory and a registry hive on disk. When a registry hive is mounted, it is read or mapped into memory from disk. When you change something in a mounted registry hive (e.g. when you create a new key or modify a value), the change happens directly in memory. The process of writing these changes back to disk is called a hive flush.
Before Windows 8.1 and Windows Server 2012 R2, the flush process first writes modified data (also known as dirty data) to a transaction log file (overwriting the modified data left there by previous flushes), then writes the same data to a primary file. If a system crash (e.g. a power outage) occurs while writing to the transaction log file, the primary file remains consistent, because no data was written to it during the failed flush; if a system crash occurs while writing to the primary file, the copy of modified data in the transaction log file is used to finish the write, bringing the primary file back to a consistent state. Before writing to a primary file, the flush process invalidates its header to record the inconsistent state; after the modified data has been successfully stored in the primary file, the flush process validates the header again (so it is possible to tell whether a primary file is consistent by examining its header). The flush process for a hive is triggered by the kernel at regular intervals or by a userspace program using the RegFlushKey() routine. This strategy writes the same data twice, reducing the performance of the operating system.
In Windows 8.1 and Windows Server 2012 R2, a new flush strategy was implemented: when the flush process is triggered for a specific hive (either by the kernel or by a userspace program) for the first time, or after the status of a transaction log file has been reset, a log entry containing the modified (dirty) data is written to the transaction log file, and the header of the primary file is invalidated. When the flush process is triggered again, a new log entry with modified data is appended to the transaction log file, and the primary file remains untouched. If a system crash occurs, the transaction log file contains the log entries required to recover the consistency of the primary file and bring it up to date. When all users (local and remote) become inactive, when a hive starts unloading (e.g. during a full shutdown), or when an hour has elapsed since the latest write to the primary file, the reconcile process writes all modified data to the primary file, validates its header, and resets the status of the transaction log file (subsequent flushes will overwrite old log entries with new ones). The new flush strategy improves performance by significantly reducing the number of disk writes.
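The incremental strategy can be sketched in the same toy style. Again, these are assumptions for illustration only: a Python dict stands in for the primary file, whole key/value pairs stand in for page-granular dirty data, and the class and method names are hypothetical.

```python
class Hive:
    """Toy model of the Windows 8.1+ incremental flush and reconcile."""

    def __init__(self):
        self.memory = {}          # the mounted hive (changes land here)
        self.dirty = {}           # dirty data not yet flushed
        self.primary = {}         # the primary file on disk
        self.primary_valid = True # the primary file's header state
        self.log = []             # the transaction log: a list of entries

    def set_value(self, key, value):
        # Changes happen directly in memory; nothing touches the disk yet.
        self.memory[key] = value
        self.dirty[key] = value

    def flush(self):
        # Append a log entry with the dirty data; invalidate the primary
        # header on the first flush after a reset. The primary file's
        # data is left untouched.
        if not self.dirty:
            return
        if self.primary_valid:
            self.primary_valid = False
        self.log.append(dict(self.dirty))
        self.dirty.clear()

    def recover(self):
        # After a crash: replay every log entry over the primary file to
        # bring it to the up-to-date, consistent state.
        if not self.primary_valid:
            for entry in self.log:
                self.primary.update(entry)
            self.primary_valid = True

    def reconcile(self):
        # Idle users, hive unload, or the one-hour timer: write all
        # modified data to the primary file, validate its header, and
        # reset the log so the next flush overwrites old entries.
        self.flush()
        for entry in self.log:
            self.primary.update(entry)
        self.primary_valid = True
        self.log.clear()
```

The saving comes from the flush path: each flush costs one sequential log append instead of two writes, and the (larger) primary file is rewritten only when the reconcile process runs.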