Gated Recurrent Units (GRU)

Instead of the input, output, and forget gates of an LSTM, GRUs have an update gate and a reset gate. The update gate determines both how much information to keep from the previous state and how much information to let in from the previous layer. The reset gate functions much like the forget gate of an LSTM, but it sits in a slightly different place. GRUs always send out their full state; they have no output gate.
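The gating described above can be sketched as a single GRU step. This is a minimal NumPy illustration, not a reference implementation; the parameter names (`Wz`, `Uz`, etc.) and their ordering are assumptions made here for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step: x is the input vector, h_prev the previous hidden state.

    params is an assumed ordering of the weight matrices and biases:
    (Wz, Uz, bz) for the update gate, (Wr, Ur, br) for the reset gate,
    and (Wh, Uh, bh) for the candidate state.
    """
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate state
    # Blend: keep (1 - z) of the old state, let in z of the new candidate.
    return (1.0 - z) * h_prev + z * h_tilde
```

Note how a single value `z` controls both keeping old state and admitting new input, whereas an LSTM uses separate forget and input gates for those two decisions.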

GRUs are slightly faster and cheaper to run, but also slightly less expressive. In cases where the extra expressiveness is not needed, GRUs can outperform LSTMs.