In Python (with Theano-style tensors), a naive implementation looks like:

h_t = T.tanh(T.dot(x_t, W_xh) + T.dot(h_prev, W_hh) + b_h)
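That single line only computes one step. As a minimal sketch (hypothetical sizes, throwaway random weights, nothing taken from the original text), theano.scan can thread the same step over a whole input sequence:

import numpy as np
import theano
import theano.tensor as T

n_in, n_hidden = 8, 16  # hypothetical dimensions, chosen only for illustration

# Shared parameters with a throwaway random initialisation
W_xh = theano.shared(np.random.randn(n_in, n_hidden).astype('float32'), name='W_xh')
W_hh = theano.shared(np.random.randn(n_hidden, n_hidden).astype('float32'), name='W_hh')
b_h = theano.shared(np.zeros(n_hidden, dtype='float32'), name='b_h')

x = T.matrix('x')    # one sequence, shape (timesteps, n_in)
h0 = T.vector('h0')  # initial hidden state, shape (n_hidden,)

def step(x_t, h_prev):
    # the recurrence from above, applied at a single time step
    return T.tanh(T.dot(x_t, W_xh) + T.dot(h_prev, W_hh) + b_h)

# scan iterates over the time axis and carries the hidden state forward
h_seq, _ = theano.scan(step, sequences=x, outputs_info=h0)
rnn_forward = theano.function([x, h0], h_seq)

Each h_t depends on earlier inputs only through the previous hidden state, which is exactly where the gradient problems discussed next come from.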

Vanilla RNNs suffer from the vanishing/exploding gradient problem — they can't learn long-range dependencies (e.g., information from 50 steps ago). This is where LSTM and GRU come in.

LSTM (Long Short-Term Memory)

LSTMs introduce a cell state (a conveyor belt of information) and three gates: forget, input, and output. These gates learn what to remember, what to write, and what to output.
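To make the gate story concrete, here is one standard formulation of a single LSTM step, written in the same Theano-style notation as the vanilla step above (the function signature and weight names are illustrative, not taken from the original):

import theano.tensor as T

def lstm_step(x_t, h_prev, c_prev,
              W_xf, W_hf, b_f,    # forget gate parameters
              W_xi, W_hi, b_i,    # input gate parameters
              W_xo, W_ho, b_o,    # output gate parameters
              W_xc, W_hc, b_c):   # candidate cell parameters
    f_t = T.nnet.sigmoid(T.dot(x_t, W_xf) + T.dot(h_prev, W_hf) + b_f)  # what to keep from the old cell
    i_t = T.nnet.sigmoid(T.dot(x_t, W_xi) + T.dot(h_prev, W_hi) + b_i)  # what to write
    o_t = T.nnet.sigmoid(T.dot(x_t, W_xo) + T.dot(h_prev, W_ho) + b_o)  # what to expose
    c_tilde = T.tanh(T.dot(x_t, W_xc) + T.dot(h_prev, W_hc) + b_c)      # candidate new content
    c_t = f_t * c_prev + i_t * c_tilde  # cell state: the "conveyor belt"
    h_t = o_t * T.tanh(c_t)             # hidden state: a filtered view of the cell
    return h_t, c_t

Because the cell state is updated only by elementwise forgetting and writing, gradients can flow across many time steps far more easily than through the repeated matrix multiplications of a vanilla RNN.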

In Keras (running on the Theano backend), an LSTM or GRU classifier over word sequences takes only a few lines:

from keras.models import Sequential
from keras.layers import LSTM, GRU, SimpleRNN, Dense, Embedding
from keras.preprocessing import sequence

max_features = 20000
maxlen = 100  # truncate reviews to 100 words
batch_size = 32

# Build model
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))  # or GRU(128)
model.add(Dense(1, activation='sigmoid'))

# Compile (Theano backend)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train (x_train/x_val are padded integer sequences, y_train/y_val are binary labels)
model.fit(x_train, y_train, batch_size=batch_size, epochs=5,
          validation_data=(x_val, y_val))
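The snippet above assumes x_train, y_train, x_val and y_val already exist. The original doesn't say which dataset the reviews come from; as an illustration, with the Keras IMDB dataset the preparation could look like this (the IMDB test split stands in for a validation set here):

from keras.datasets import imdb
from keras.preprocessing import sequence

# Keep only the `max_features` most frequent words; reviews arrive as lists of word indices
(x_train, y_train), (x_val, y_val) = imdb.load_data(num_words=max_features)

# Pad or truncate every review to exactly `maxlen` tokens so the Embedding layer sees fixed-length input
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = sequence.pad_sequences(x_val, maxlen=maxlen)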