diff --git "a/data/testing_github_dataset.json" "b/data/testing_github_dataset.json" deleted file mode 100644--- "a/data/testing_github_dataset.json" +++ /dev/null @@ -1,728 +0,0 @@ -[ - { - "instruction": "Multiple sequences", - "output": "

It wasn't the LSTM, it was some other level in the network. Reshaping and flattening where required fixes most issues. Thanks :)

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Working with large datasets like Imagenet", - "output": "

Keras models absolutely do support batch training. The CIFAR10 example demonstrates this.

\n

What's more, you can use the image preprocessing module (data augmentation and normalization) on batches as well. Here's a quick example:

\n
datagen = ImageDataGenerator(\n        featurewise_center=True, # set input mean to 0 over the dataset\n        samplewise_center=False, # set each sample mean to 0\n        featurewise_std_normalization=True, # divide inputs by std of the dataset\n        samplewise_std_normalization=False, # divide each input by its std\n        zca_whitening=False, # apply ZCA whitening\n        rotation_range=20, # randomly rotate images in the range (degrees, 0 to 180)\n        width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)\n        height_shift_range=0.2, # randomly shift images vertically (fraction of total height)\n        horizontal_flip=True, # randomly flip images\n        vertical_flip=False) # randomly flip images\n\ndatagen.fit(X_sample) # let's say X_sample is a small-ish but statistically representative sample of your data\n\n# let's say you have an ImageNet generator that yields ~10k samples at a time.\nfor e in range(nb_epoch):\n    print(\"epoch %d\" % e)\n    for X_train, Y_train in ImageNet(): # these are chunks of ~10k pictures\n        for X_batch, Y_batch in datagen.flow(X_train, Y_train, batch_size=32): # these are chunks of 32 samples\n            loss = model.train(X_batch, Y_batch)\n\n# Alternatively, without data augmentation / normalization:\nfor e in range(nb_epoch):\n    print(\"epoch %d\" % e)\n    for X_train, Y_train in ImageNet(): # these are chunks of ~10k pictures\n        model.fit(X_train, Y_train, batch_size=32, nb_epoch=1)
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Adding Batch Size as explicit parameter for Batch Normalization layer", - "output": "

You're right

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Setting up tests", - "output": "

Could you add a code coverage button next to the build status in the README as well?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Model serialization", - "output": "

For people coming here from Google looking for a way to serialize a Keras model: How can I save a Keras model?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Autoencoder Architechture", - "output": "

Implemented #180

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Requirements for 1Dconvolution", - "output": "

@sjhddh unfortunately I got busy with other things and never finished working on it. My code assumed the rows/columns of the embeddings were flipped from how keras stores them. That branch is now also several hundred commits behind master and I don't know how much else has changed about keras...

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "cifar10.py - imports cPickle error", - "output": "

The change you propose does not appear to change what the code is actually doing (it's still the same module being imported). I tried it anyway, but the encoding issue still pops up. We'd probably have to re-save the pkl files to fix it, which is a pain...

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Accessing internal states", - "output": "

Sure, you can. See the answer to this question: #41

\n

It will probably be made more straightforward in future API upgrades.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "glorot_normal init should be glorot_uniform?", - "output": "

Benchmarking on MNIST gives me better results with glorot_uniform compared to glorot_normal. glorot_uniform also appears to perform about as well as lecun_uniform.

\n

glorot_uniform:

\n
Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0257 - acc.: 0.7500 - val. loss: 0.0123 - val. acc.: 0.9348\nEpoch 1\nloss: 0.0092 - acc.: 1.0000 - val. loss: 0.0081 - val. acc.: 0.9512\nEpoch 2\nloss: 0.0112 - acc.: 0.8750 - val. loss: 0.0070 - val. acc.: 0.9590\nEpoch 3\nloss: 0.0031 - acc.: 1.0000 - val. loss: 0.0061 - val. acc.: 0.9631\nEpoch 4\nloss: 0.0029 - acc.: 1.0000 - val. loss: 0.0054 - val. acc.: 0.9664\nEpoch 5\nloss: 0.0027 - acc.: 1.0000 - val. loss: 0.0051 - val. acc.: 0.9674\nEpoch 6\nloss: 0.0047 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9657\nEpoch 7\nloss: 0.0012 - acc.: 1.0000 - val. loss: 0.0050 - val. acc.: 0.9679\nEpoch 8\nloss: 0.0119 - acc.: 0.8750 - val. loss: 0.0048 - val. acc.: 0.9700\nEpoch 9\nloss: 0.0011 - acc.: 1.0000 - val. loss: 0.0045 - val. acc.: 0.9712\n
\n

glorot_normal:

\n
Train on 37800 samples, validate on 4200 samples\nEpoch 0\nloss: 0.0208 - acc.: 0.8750 - val. loss: 0.0127 - val. acc.: 0.9367\nEpoch 1\nloss: 0.0113 - acc.: 1.0000 - val. loss: 0.0088 - val. acc.: 0.9490\nEpoch 2\nloss: 0.0045 - acc.: 1.0000 - val. loss: 0.0076 - val. acc.: 0.9548\nEpoch 3\nloss: 0.0245 - acc.: 0.7500 - val. loss: 0.0070 - val. acc.: 0.9598\nEpoch 4\nloss: 0.0090 - acc.: 0.8750 - val. loss: 0.0062 - val. acc.: 0.9643\nEpoch 5\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9660\nEpoch 6\nloss: 0.0009 - acc.: 1.0000 - val. loss: 0.0058 - val. acc.: 0.9650\nEpoch 7\nloss: 0.0032 - acc.: 1.0000 - val. loss: 0.0057 - val. acc.: 0.9643\nEpoch 8\nloss: 0.0155 - acc.: 0.8750 - val. loss: 0.0053 - val. acc.: 0.9679\nEpoch 9\nloss: 0.0053 - acc.: 1.0000 - val. loss: 0.0052 - val. acc.: 0.9679\n
\n

Code is at https://www.kaggle.com/users/123235/fchollet/digit-recognizer/simple-deep-mlp-with-keras
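
\n

For reference, the initialization is selected per layer via the init argument; this is just an untested one-liner sketch using the old Dense(input_dim, output_dim) signature, with placeholder sizes:

\n
from keras.layers.core import Dense\n\n# pick the initializer per layer: 'glorot_uniform', 'glorot_normal', 'lecun_uniform', ...\nlayer = Dense(784, 128, init='glorot_uniform')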

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Add interrupt handlers", - "output": "

Any chance this can be reopened? It's fair that programs not ending on SIGINT are annoying, but I think that somebody who would add this callback explicitly would be very much aware of this fact, and that they'll just need to send SIGINT twice for the running command to stop immediately.

\n

When iterating on a model running in the cloud, I need to run some clean-up code once training ends, and I often realize too late in the process that I used too many epochs. This would help with these cases.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Fix in cifar example", - "output": "

The API for Flatten changed recently. Thanks for pointing this out! It's fixed now.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "standardize_y does not support using alternative classes as datasets", - "output": "

Closed by #48

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to convert a caffe model to Keras type?", - "output": "

This Caffe-to-Keras weight converter is what you are looking for:

\n

https://github.com/pierluigiferrari/caffe_weight_converter

\n

It converts .caffemodel files to .h5 weight files. It converts weights only, not the model definition, but the weights are really all you need anyway.

\n

For any given model, the model definition either requires only Keras core library layers, in which case it's super easy to write in Keras manually, or the model definition is complex and has custom layer types, in which case a model definition converter would probably fail anyway.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to stack convolutional layer and lstm?", - "output": "\n
class MyReshape(Layer):\n    def get_output(self, train):\n        X = self.get_input(train)\n        nshape = (1,) + X.shape \n        return theano.tensor.reshape(X, nshape)
\n

It turns a batch of N vectors into a batch of size 1 containing a sequence of N vectors.

\n

Note that if you do something like that, the length of the input and of the labels won't match, so you won't be able to use the model.fit() method for training. However, you should be able to use the model.train(X, y) method just fine, on small minibatches.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to use GPU", - "output": "

Check out the comments in the examples:
\nhttps://github.com/fchollet/keras/search?utf8=\u2713&q=gpu

\n

GPU/CPU selection is handled by Theano, so you can specify it on the command line or in the ~/.theanorc settings file:
\nhttp://deeplearning.net/software/theano/tutorial/using_gpu.html

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "MLP example doesn't make progress", - "output": "

I totally missed that. Must read more docs. Thank you.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Using the model parameters which gave best validation performance", - "output": "

You can just write your own callback class and save the best model parameters in its on_batch_end() function.
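
\n

For example, here is a minimal, untested sketch of such a callback (the class name SaveBestWeights is made up for illustration; it keeps a copy of the weights whenever the batch loss improves):

\n
from keras.callbacks import Callback\n\nclass SaveBestWeights(Callback):\n    def __init__(self):\n        self.best_loss = float('inf')\n        self.best_weights = None\n\n    def on_batch_end(self, batch, logs={}):\n        loss = logs.get('loss')\n        if loss is not None and loss < self.best_loss:\n            self.best_loss = loss\n            # keep a copy of the current parameters\n            self.best_weights = [layer.get_weights() for layer in self.model.layers]\n\n# usage: model.fit(X, y, callbacks=[SaveBestWeights()])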

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Can we train the Recurrent Neural Network with sequential targets?", - "output": "

This is possible, you just have to use the return_sequences constructor argument of any recurrent unit (I suggested using GRU or LSTM as recurrent unit...).

\n

With return_sequences = True, the output of the unit will be a sequence of vectors (each of size output_dim), one per element in the input sequence.

\n

More here: http://keras.io/layers/recurrent/
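
\n

As an illustration, a minimal, untested sketch with the old Layer(input_dim, output_dim) constructors (layer sizes and class count are placeholders):

\n
from keras.models import Sequential\nfrom keras.layers.core import TimeDistributedDense, Activation\nfrom keras.layers.recurrent import GRU\n\nmodel = Sequential()\n# input: (nb_samples, timesteps, 64); output: one vector per timestep\nmodel.add(GRU(64, 128, return_sequences=True))\nmodel.add(TimeDistributedDense(128, 10))\nmodel.add(Activation('time_distributed_softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\n# y has shape (nb_samples, timesteps, 10)\nmodel.fit(X, y, batch_size=32, nb_epoch=10)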

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "missing some files which caused two examples failed!", - "output": "

You will have to download that particular dataset yourself. We are not hosting the download as part of the Keras datasets because the file is quite large.

\n

The header comments on this particular example have instructions on how to download the dataset.

\n

For the Otto example, you will need to get the data from Kaggle: https://www.kaggle.com/c/otto-group-product-classification-challenge/data

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How could we use Leaky ReLU and Parametric ReLU as activation function ?", - "output": "

There's a PReLU example in the Kaggle Otto example; it can be used as a template for all of the advanced activations:

\n
from keras.layers.advanced_activations import LeakyReLU, PReLU\n..\n..\nmodel.add(Dense(512, 512, activation='linear')) # Add any layer, with the default of an identity/linear squashing function (no squashing)\nmodel.add(LeakyReLU(alpha=.001))   # add an advanced activation\n...\n...\nmodel.add(Dense(512, 123, activation='linear')) # Add any layer, with the default of an identity/linear squashing function (no squashing)\nmodel.add(PReLU((123,)))   # add an advanced activation
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "What does inner_init do ?", - "output": "

@fchollet Thanks for your reply !

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Is there a way to access gradients of each layer?", - "output": "

I am looking for the same thing. Is this true even if we use Keras on top of TensorFlow?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Early stopping options for Keras", - "output": "

earlyStopping=keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
\nmodel.fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[earlyStopping], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How can I get the activation value of each layer?", - "output": "

Sure, you can. See the answer to this question: #41

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Can't get a simple XOR problem network to work, answer always array([0])", - "output": "

If I weren't interested in learning how to use Keras, I wouldn't have raised this issue. If you don't have the time to help, then you shouldn't spend it writing a passive-aggressive retort either.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "[help wanted] any way to update learning rate and momentum", - "output": "

How is the decay applied to the learning rate? Is it multiplied by the decay factor after each epoch? Is the decay subtracted from it? Is it exponential?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Regression problems / continuous target", - "output": "

Hi Dan,

\n

Keras can in fact work with regression problems, and even multidimensional regression (e.g. autoencoders). In such cases, you would use .predict() to get the output, and everything that is classification-related (class_mode, show_accuracy) would be irrelevant (i.e. if you tried to display classification accuracy it would be ~0 all the way).

\n

Important to note: for regression cases, you would need to use mse or mae as the loss, and you couldn't use softmax as the activation (since the output of the model isn't supposed to be probabilities). I think it would be useful to introduce a regression task in the examples, to point out these gotchas...

\n

Here's a simple 2-layer unidimensional regression:

\n
from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\n\nmodel = Sequential()\nmodel.add(Dense(10, 64))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(64, 1))\nmodel.compile(loss='mean_absolute_error', optimizer='rmsprop')\n\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16)\nscore = model.evaluate(X_test, y_test, batch_size=16)
\n

And here's an autoencoder:

\n
from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\n\nmodel = Sequential()\nmodel.add(Dense(10, 5))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(5, 10))\nmodel.compile(loss='mean_squared_error', optimizer='rmsprop')\n\nmodel.fit(X_train, X_train, nb_epoch=20, batch_size=16)\nscore = model.evaluate(X_test, X_test, batch_size=16)
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Could Keras handle a large dataset, for instance more than 50GB?", - "output": "

Keras can work with datasets that don't fit in memory, through the use of batch training.

\n

There are two ways to make this work:

\n
# let's say you have a BatchGenerator that yields a large batch of samples at a time\n# (but still small enough for the GPU memory)\nfor e in range(nb_epoch):\n    print(\"epoch %d\" % e)\n    for X_train, Y_train in BatchGenerator(): \n        model.fit(X_train, Y_train, batch_size=32, nb_epoch=1)\n\n\n# Alternatively, let's say you have a MiniBatchGenerator that yields 32-64 samples at a time:\nfor e in range(nb_epoch):\n    print(\"epoch %d\" % e)\n    for X_train, Y_train in MiniBatchGenerator(): # these are chunks of 32-64 samples\n        model.train(X_train, Y_train)
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "does it automatically use multiple gpu, if availabe?", - "output": "

@fyears @fchollet I have the following code for data parallelism for Keras. The idea is to manually synchronize the model (by averaging) from multiple GPUs after each training batch:

\n
import warnings\nimport multiprocessing\nimport numpy as np\nfrom datetime import datetime\n\n# constant\nNUM_GPU=4\nNUM_EPOCH=100\nMINI_BATCH=128\n\ndef train_model(gpu_id, data_queue, model_queue, num_epoch=NUM_EPOCH, num_batch=1):\n    import theano.sandbox.cuda\n    theano.sandbox.cuda.use(gpu_id)\n    import theano\n    import theano.tensor as T\n    from keras.models import Sequential\n    from keras.layers.convolutional import Convolution2D\n    from demosaic_cnn import Crop2D, mean_absolute_error\n\n    # define the model\n    model=Sequential()\n    # put your model definition here\n\n    # compile the model\n    model.compile(loss=mean_absolute_error, optimizer='RMSprop')\n\n    # train the model\n    best_loss=np.inf\n    best_save='_'.join((gpu_id,datetime.now().strftime('%Y_%m_%d_%H_%M_%S.h5')))\n    for epoch in range(num_epoch):\n        print gpu_id,'@epoch',epoch\n        for batch in range(num_batch):\n            print gpu_id,'@batch',batch\n            data=data_queue.get()\n            loss=model.train_on_batch(data[0], data[1])\n            # after a batch a data, synchronize the model\n            model_weight=[layer.get_weights() for layer in model.layers]\n            # we need to send NUM_GPU-1 copies out\n            for i in range(1,NUM_GPU):\n                model_queue[gpu_id].put(model_weight)\n            for k in model_queue:\n                if k==gpu_id:\n                    continue\n                # obtain the model from other GPU\n                weight=model_queue[k].get()\n                # sum it\n                for l,w in enumerate(weight):\n                    model_weight[l]=[w1+w2 for w1,w2 in zip(model_weight[l],w)]\n            # average it\n            for l,w in enumerate(model_weight):\n                model.layers[l].set_weights([d/NUM_GPU for d in w])\n        # after each epoch, try to save the current best model\n        if best_loss>loss:\n            model.save_weights(best_save, overwrite=True)\n            best_loss=loss\n    model_queue[gpu_id].close()\n\nif __name__=='__main__':\n    data=[]\n    label=[]\n    num_data=len(data)\n    gpu_list=['gpu{}'.format(i) for i in range(NUM_GPU)]\n    # for send the data\n    data_queue=multiprocessing.Queue(20)\n    # for synchronize the model, we create a queue for each model\n    model_queue={gpu_id:multiprocessing.Queue(2) for gpu_id in gpu_list}\n    threads=[multiprocessing.Process(target=train_model(gpu_id, data_queue, model_queue, num_batch=(num_data)/MINI_BATCH/NUM_GPU)) for gpu_id in gpu_list]\n    for thread in threads:\n        thread.start()\n    for epoch in range(NUM_EPOCH):\n        print 'data@epoch',epoch\n        for start in range(0,num_data,MINI_BATCH):\n            print 'data@batch',start/MINI_BATCH\n            data_queue.put((data[start:(start+MINI_BATCH)],label[start:(start+MINI_BATCH)]))\n    data_queue.close()\n    for thread in threads:\n        thread.join()
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "API thoughts on supporting non-sequential models", - "output": "

It is very convenient. I want to concatenate the feature maps of 2 convolutional branches; the code looks like this:
\nmodel1 = Sequential()
\nmodel1.add(Convolution2D(2, 1, 5, 5, border_mode='valid'))
\nmodel1.add(Activation('tanh'))

\n

model1.add(Convolution2D(4, 2, 3, 3, border_mode='valid'))
\nmodel1.add(Activation('tanh'))
\nmodel1.add(MaxPooling2D(poolsize=(2, 2))) # get feature maps(num = 4,size = 11*11)

\n

model2 = Sequential()
\nmodel2.add(Convolution2D(4, 1, 7, 7, border_mode='valid'))
\nmodel2.add(Activation('tanh'))
\nmodel2.add(MaxPooling2D(poolsize=(2, 2)))# get feature maps(num = 4,size = 11*11)

\n

model = Sequential()
\nmodel.add(Merge([model1, model2], mode = 'concat')) # concatenate feature maps(num = 8,size = 11*11)

\n

model.add(Convolution2D(16, 8, 3, 3, border_mode='valid'))
\nmodel.add(Activation('tanh'))
\nmodel.add(MaxPooling2D(poolsize=(2, 2)))

\n

model.add(Flatten())
\nmodel.add(Dense(16*4*4, 128, init='normal'))
\nmodel.add(Activation('tanh'))

\n

model.add(Dense(128, 10, init='normal'))
\nmodel.add(Activation('softmax'))

\n

sgd = SGD(l2=0.0,lr=0.05, decay=1e-6, momentum=0.9, nesterov=True)
\nmodel.compile(loss='categorical_crossentropy', optimizer='sgd',class_mode=\"categorical\")
\nmodel.fit([data,data], label, batch_size=100, nb_epoch=10,shuffle=True,verbose=1,show_accuracy=True,validation_split=0.2)

\n

It doesn't work well. If I use 'sum' instead, there's no problem, like this:
\n....
\nmodel = Sequential()
\nmodel.add(Merge([model1, model2], mode = 'sum'))

\n

model.add(Convolution2D(16, 4, 3, 3, border_mode='valid'))
\nmodel.add(Activation('tanh'))
\nmodel.add(MaxPooling2D(poolsize=(2, 2)))
\n....

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "suggestion: save the training history in model.train_history_", - "output": "

Hi,
\nI implemented this wrapper class for tracking the training history.

\n
from keras.models import Sequential\nfrom keras.models import load_model\n\nfrom collections import defaultdict\n\nimport pickle\n\n\ndef _merge_dict(dict_list):\n    dd = defaultdict(list)    \n    for d in dict_list:\n        for key, value in d.items():\n            if not hasattr(value, '__iter__'):\n                value = (value,)\n            [dd[key].append(v) for v in value]\n    return dict(dd)\n\ndef save(obj, name):\n    try:\n        filename = open(name + \".pickle\",\"wb\")\n        pickle.dump(obj, filename)\n        filename.close()\n        return(True)\n    except:\n        return(False)\n\ndef load(name):\n    filename = open(name + \".pickle\",\"rb\")\n    obj = pickle.load(filename)\n    filename.close()    \n    return(obj)\n\ndef load_model_w(name):\n    model_k = load_model(name)\n    history = load(name)\n    model = Sequential_wrapper(model_k)\n    model.history = history\n    return(model)\n\nclass Sequential_wrapper():\n    \"\"\"\n    %s\n    \"\"\"%Sequential.__doc__\n    \n    def __init__(self, model=Sequential()):\n        self.history = {}\n        self.model = model\n        \n        # method shortcuts\n        methods = dir(self.model)\n        for method in methods:\n            if method.startswith('_'): continue\n            if method in ['model','fit','save']: continue\n            try:\n                exec('self.%s = self.model.%s' % (method,method))\n            except:\n                pass\n    \n    def _update_history(self,history):\n        if len(self.history)==0:\n            self.history = history\n        else:\n            self.history = _merge_dict([self.history,history])\n    \n    def fit(self, x, y, batch_size=32, epochs=10, verbose=1, callbacks=None,\n            validation_split=0.0, validation_data=None, shuffle=True,\n            class_weight=None, sample_weight=None,\n            initial_epoch=0, **kwargs):\n        \"\"\"\n        %s\n        \"\"\"%self.model.fit.__doc__\n        h = self.model.fit(x, y, batch_size, epochs, verbose, callbacks,\n                     validation_split, validation_data, shuffle,\n                     class_weight, sample_weight,\n                     initial_epoch, **kwargs)\n        self._update_history(h.history)\n        return h\n    \n    def save(self, filepath, overwrite=True):\n        \"\"\"\n        %s\n        \"\"\"%self.model.save.__doc__\n        save(self.history,filepath)\n        self.model.save(filepath, overwrite)
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "verbose=1 significantly slow down the training process.", - "output": "

What's your batch size?

\n

All that verbose=1 does is print a log line after every batch. Printing does take a few milliseconds, so you don't want it to happen hundreds of times per second. In most cases the training time on each batch will be large enough that you won't see a difference. If the training time per batch is very small, you will see the difference, though.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Unable to install Keras to Anaconda3", - "output": "

Agreed that Keras should work with Anaconda. But in this case it sounds
\nlike an Anaconda issue, not a Keras issue. Keras does require h5py, so
\nthat line belongs there.

\n

On 3 December 2015 at 06:12, i5han notifications@github.com wrote:

\n
\n

I believe that keras should work for anaconda as well. Although I can't
\nprovide any hard numbers, I think that's also a significant amount of users.
\nAsking those users to switch to winpython just to use Keras is not the most
\nuser-friendly solution, and it also acts as a hindrance to spreading Keras
\n:)

\n

\u2014
\nReply to this email directly or view it on GitHub
\n#101 (comment).

\n
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Roadmap", - "output": "

Are there any plans for MultiNEAT/HyperNEAT type support? I've loved Keras, and the ability to utilize NEAT functionality inline would be tremendously useful.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "\"real time\" recurrent nets", - "output": "

I take it it can't really be done then. Safest bet is to just export the weights and reimplement the computation in python?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Can i add a dropout after the input layer?", - "output": "

A note to future readers: because we need to explicitly specify the size of the input, we must use the \"input_shape\" parameter, i.e. if before we used

\n

model = Sequential()
\nmodel.add(Dense(64, input_dim=20, init='uniform'))

\n

Now we would use

\n

model = Sequential()
\nmodel.add(Dropout(0.5, input_shape=(20,)))
\nmodel.add(Dense(64, init='uniform'))

\n

(This took me a while to figure out, since I hadn't yet tried anything like convolutions, which use the \"input_shape\" parameter, and since I hadn't read about the abstract base Layer class in the documentation, which Dropout inherits from.)

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Is it possible to merge two different input layers into one?", - "output": "

I am looking for a recommendation to merge vector information with CNN output into a regression model.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "l1, l2 regularization", - "output": "

The keyword syntax was broken, but it will be fixed/changed in PR #77

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "early stopping", - "output": "

What you can do is display training and testing loss and accuracy at every epoch (using the arguments validation_split, verbose, show_accuracy of the .fit method). Then you can decide for yourself what is an appropriate number of epochs.
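
\n

For example (untested), with a 10% held-out validation split and accuracy reporting at each epoch:

\n
model.fit(X_train, y_train, batch_size=128, nb_epoch=50,\n          validation_split=0.1, show_accuracy=True, verbose=1)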

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Extract weight matrix", - "output": "

Sure. The method model.save_weights() will do it for you and store the weights to hdf5.

\n

If you want to do it manually, you'd do something like:

\n
for layer in model.layers:\n    weights = layer.get_weights() # list of numpy arrays
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Problem with return_sequences=True", - "output": "

So, while this now works in the current version (at least it runs, I haven't checked the results):

\n
model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,100,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\n
\n

as soon as the number of samples and number of time steps are different, it does not work any more:

\n
model = Sequential()\nmodel.add(GRU(256, 256, return_sequences=True))\nmodel.compile(loss='mean_squared_error', optimizer='sgd')\n\ninz = np.random.randn(100,68,256)\nrez = model.predict_proba(inz) # 3d array\nmodel.fit(inz,rez)\n
\n
AssertionError                            Traceback (most recent call last)\n<ipython-input-89-5c6ce0d8a880> in <module>()\n----> 1 model.fit(inz,rez)\n\n/usr/local/lib/python2.7/site-packages/Keras-0.1.1-py2.7.egg/keras/models.pyc in fit(self, X, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, show_accuracy, class_weight)\n    205                     batch_logs['accuracy'] = acc\n    206                 else:\n--> 207                     loss = self._train(*ins)\n    208                 batch_logs['loss'] = loss\n    209 \n...\nAssertionError: Theano Assert failed!\nApply node that caused the error: Assert(InplaceDimShuffle{1,0,2}.0, Elemwise{Composite{EQ(i0, ((i1 + i2) - Switch(LT(i3, (i1 + i2)), i4, (i1 + i2))))}}.0)\nInputs types: [TensorType(float64, 3D), TensorType(int8, scalar)]\nInputs shapes: [(68, 100, 256), ()]\nInputs strides: [(2048, 139264, 8), ()]\nInputs values: ['not shown', array(0, dtype=int8)]\n
\n

Is this a bug, or am I missing something?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to get the output of Conv layer and FC layer?", - "output": "

Hey, I think I can answer your questions.

\n
    \n
  1. In order to visualize the feature maps, just use the get_weights method of each layer and plot it using matplotlib. I have a working implementation of the same, and hope I'll be able to generalize it soon and create a PR.
  2. For this, the best method that works for me is to create 2 models. Train the 1st model with your labels. The 2nd model is identical to the 1st except it does not contain the last (or all fully connected) layers (don't forget to flatten). Using the get_weights method above, get the weights of the 1st model and assign them to the 2nd model using set_weights. Then use predict_proba to get the feature vectors.
\n

Hope it helps.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "importing package fails", - "output": "

I think I have figured this out. NVM

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Activation penalties", - "output": "

This is definitely something that would be nice to have, but to add it we will have to find a truly modular, transparent way to make it fit in. I am pretty sure this is possible, but it might take a few days.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Connecting one layer with two other layers", - "output": "

Now possible through the use of the Merge layer: http://keras.io/layers/core/#merge

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "LSTM - Sequences with different num of time steps", - "output": "

There are two simple and most often implemented ways of handling this:

\n
    \n
  1. Bucketing and Padding\n
    1. Separate input samples into buckets that have similar length, ideally such that each bucket has a number of samples that is a multiple of the mini-batch size.
    2. For each bucket, pad the samples to the length of the longest sample in that bucket with a neutral value (see the padding sketch below). 0's are frequent, but for something like speech data, a representation of silence is used, which is often not zeros (e.g. the FFT of a silent portion of audio is used as the neutral padding).
  2. Bucketing\n
    1. Separate input samples into buckets of exactly the same length.\n
      • This removes the need for determining what a neutral padding is.
      • However, the size of the buckets in this case will frequently not be a multiple of the mini-batch size, so in each epoch the updates will, multiple times, not be based on a full mini-batch.
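
\n

A minimal padding sketch (untested), assuming the sequences of one bucket are already grouped into a plain list of lists:

\n
from keras.preprocessing import sequence\n\nbucket = [[3, 8, 2], [9, 1], [4, 4, 4, 7]]  # sequences of similar length\nmaxlen = max(len(s) for s in bucket)\n# pad every sequence in the bucket to the longest one, using 0 as the neutral value\nX_bucket = sequence.pad_sequences(bucket, maxlen=maxlen)\nprint(X_bucket.shape)  # (3, 4)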
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Fix batch normalization during test time", - "output": "

@fchollet This looks like a really old bug that has been fixed. Is this true?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "any plan to add the support of maxout", - "output": "

I think this can be closed as per #134 ?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Reconfiguring a model after training", - "output": "

At first I want to say I'm sorry that I post this here, but I could not find a solution to my problem on the net and this seems to be the best place for it.

\n

Let's say I want to train a GRU, and because I need stateful=True, the batch size has to be known beforehand, and the timesteps also have to be known in advance.

\n

Using the functional API I would have an Input as follows:

\n
input_1 = Input(batch_shape=(batch_size, time_steps, features))
\n

But when I evaluate the model I don't want to pass my test data in batches with fixed timesteps. My
\nsolution at the moment is to load the saved model and rebuild it with:

\n
input_1 = Input(shape=(None, num_input_dim))
\n

To do that though I need a method that goes through every layer of the model and then
\nset the weights afterwards.

\n
        input_1 = Input(shape=(None, num_input_dim))\n        x1 = input_1\n        weights = []\n        for l in range(0, len(layers)):\n            if isinstance(layers[l], keras.layers.GRU):\n                x1 = GRU(layers[l].output_shape[-1], return_sequences=True)(x1)\n                weights.append(layers[l].get_weights())\n            elif isinstance(layers[l], keras.layers.Dense):\n                x1 = Dense(layers[l].output_shape[-1], activation='tanh')(x1)\n                weights.append(layers[l].get_weights())\n            else:\n                continue
\n

(This is just an example and I find this solution very inelegant.)

\n

There must be a better way to redefine the input shape. Can somebody help me out here
\nplease.

\n
\n

From the discussion before I take it that I do not have to redefine the layers to

\n
stateful = False
\n

for testing purposes.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "no pip yet?", - "output": "

You can install it with pip using the following command:

\n

pip install git+git://github.com/fchollet/keras.git

\n

The library is not available on PyPI yet as it is still alpha. We should probably think about releasing it on PyPI after we reach a stable version.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Move regularizers to layer definitions?", - "output": "

I created a pull request with a working implementation. Let me know what you think.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Batch Embedding", - "output": "

Never mind, I figured it out eventually: moving between embedding/feedforward/recurrent layers requires a lot of reshaping and flattening in between.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Rename Time Distributed dense/softmax", - "output": "

I personally have no opinion. Maybe \"time distributed\" is less ambiguous, as \"distributed\" implies that you are doing the same thing over different timesteps, rather than doing a temporal operation (like a temporal convolution, for instance).
\nBut maybe that's also clear with \"temporal\".

\n

It would be interesting to gather a few opinions from users, about which name sounds more natural and explicit.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "__init__ methods in layers/embeddings.py make reference to argument that is not in the argument list", - "output": "

Just fixed this.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Initiate a ToDo List ", - "output": "

+1 for tagging issues with a few milestones. Knowing what's mission critical before a v1 / PyPI release will help make sure development effort is focused in the right direction.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How can I get hidden layer representation of the given data?", - "output": "

One simple way to do it is to use the weights of your model to build a new model that's truncated at the layer you want to read. Then you can run the ._predict(X_batch) method to get the activations for a batch of inputs.

\n

Example:

\n
# this is your initial model\nmodel = Sequential()\nmodel.add(Dense(20, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dense(64, 1, init='uniform'))\nmodel.add(Activation('softmax'))\n\n# we train it\nmodel.compile(loss='mse', optimizer='sgd')\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16)\n\n# we build a new model with the activations of the old model\n# this model is truncated after the first layer\nmodel2 = Sequential()\nmodel2.add(Dense(20, 64, weights=model.layers[0].get_weights()))\nmodel2.add(Activation('tanh'))\n\nactivations = model2._predict(X_batch)
\n

Note: I haven't tested it.

\n

Another way to do it would be to define a Theano function to get the layer's output:

\n
import theano\nget_activations = theano.function([model.layers[0].input], model.layers[1].output(train=False), allow_input_downcast=True)\nactivations = get_activations(X_batch) # same result as above
\n

Note: also untested.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Recurrent Models with sequences of mixed length", - "output": "

In addition, here are a few quick examples of solutions to your problem:

\n

Zero-padding

\n
X = keras.preprocessing.sequence.pad_sequences(sequences, maxlen=100)\nmodel.fit(X, y, batch_size=32, nb_epoch=10)
\n

Batches of size 1

\n
for seq, label in zip(sequences, y):\n   model.train(np.array([seq]), [label])
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "New datasets and application examples", - "output": "

As @morenoh149 says, this issue is resolved by the CONTRIBUTING.md.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to use the pretrained model such as imagenet-vgg-f?", - "output": "

Any progress on this?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How can I compute a meaningful clip_norm threshold for my particular Network?", - "output": "

Plot some gradient histograms of the concatenated values of all variables if you want to set this threshold for all gradients at once, or plot one per variable to set it for each variable separately. That will give you a good idea of how to set the clipping parameter.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to save and load model?", - "output": "

So far I've been using cPickle to save/load models, but I'm looking into writing a fast & compact save/load function based on HDF5. Not very high on the priority list though...
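
\n

For reference, the cPickle route mentioned above looks roughly like this (untested; whether a compiled model pickles cleanly depends on your Theano version):

\n
import cPickle\n\n# save\nwith open('model.pkl', 'wb') as f:\n    cPickle.dump(model, f, protocol=cPickle.HIGHEST_PROTOCOL)\n\n# load\nwith open('model.pkl', 'rb') as f:\n    model = cPickle.load(f)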

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Model training diverges after some level ?", - "output": "

Yeah, that's perfectly rational; then it should be corrected.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "SimpleRNN Error", - "output": "

Awesome, thanks for the update.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "preprocessing utils would greatly benefit from sklearn", - "output": "
from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer(max_features=max_words,binary=True)\nvectorizer.fit(phrases)\nsparse = vectorizer.transform(phrases)\nX=[row.indices for row in sparse] \n# ^^ sparse matrix to sequences\n
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Does this support validation set while training to see generalization bound of the model?", - "output": "

Sure! To monitor train loss and test loss after each batch, you could use:

\n
for e in range(nb_epoch):\n   for batch_index in range(0, len(X_train)/batch_size+1):\n      batch = range(batch_index*batch_size, min(len(X_train), (batch_index+1)*batch_size))\n      train_loss = model.train(X_train[batch], y_train[batch])\n      test_loss = model.evaluate(X_test, y_test, batch_size=batch_size)
\n

Or alternatively, monitor train loss and test loss at each epoch (more practical):

\n
for e in range(nb_epoch):\n    train_loss = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=1)\n    test_loss = model.evaluate(X_test, y_test, batch_size=batch_size)
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Which version of Python is used?", - "output": "

Keras was written with Python 2.7 in mind. But I would expect 2to3 to be enough for use with Python 3.

\n

I'll update the README.md with the Python version.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Is there an example to show feature extraction?", - "output": "

How can I save the features of the last fully connected layer (before the softmax layer) of the trained model?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Preserving order - Shuffling in flow of ImageDataGenerator", - "output": "

Awesome thanks!

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Place documentation for objects in docstrings in source code", - "output": "

Since documentation seems to be coming up frequently in the issues lately, maybe this would be worth thinking about, @fchollet ? If you think it is a path you want to take, let me know and I will start a branch for the transition.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "About the dropout", - "output": "

Yes, for all intents and purposes you can consider that Dropout is per-sample. The Dropout mask is a tensor that is random over every dimension.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "example code error(keras/examples/reuters_mlp.py)", - "output": "

Thanks, this is fixed.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "bidirectional rnn", - "output": "

Thanks, I have created a bidirectional LSTM with MLP layers, for anyone who might be interested in using it:
\nhttps://github.com/hycis/bidirectional_RNN.git
\nI will close the issue.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Fix model.fit() multiple output", - "output": "

Hi @fchollet and @tristandeleu,
\nTo copy a list, you can just do self.callbacks = list(callbacks). No need for the list comprehension here.
\nHowever, it would be better to use None instead. See here: http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Feedback from callbacks", - "output": "

Closing this, discussion in #219

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Add a data loading example for RNNs", - "output": "

Here's what you would do as input to a LSTM:

\n
import numpy as np\n# 2 samples with 3 timesteps and 1 dimension\nsamples = [[[1], [0], [1]], [[1], [1], [0]]]\nX = np.array(samples)\nprint X.shape\n\n# alternatively\nsamples = [[1, 0, 1], [1, 1, 0]]\nX = np.array(samples)\nX = np.reshape(X, (len(samples), 3, 1))\nprint X.shape
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Compile error while using Embedding and Merge together", - "output": "

Having the same issue, I'm using Graph to address this but it isn't very clean.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Implement stack-based RNN layers", - "output": "

It would quite similar to the existing layers in layers/recurrent.py. Instead of a single step function you'd have 3 helper functions, each for one of the 3 equations of paragraph 3.1 in the paper (separation is cleaner).

\n

You maintain a stack matrix (for performance reasons you might want to have it be of fixed-size, with a separate integer parameter indicating the position of the \"latest\" vector in the matrix). Iterating over the matrix would be done with Theano scan.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "error while using save_weights()", - "output": "

Thank you, tdhd.

\n

I will try to upgrade my package and specify overwrite=True. :)

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "How to specify the number of CPU cores used?", - "output": "

In a cursory search, I could find nothing in Theano about setting the core count, and I was not expecting to.

\n

Theano, and deep learning, is all about parallelism: do everything you can do in parallel. Limiting yourself to a subset of the things you can do in parallel, intentionally, goes against everything we hold dear. Personally, I'd give a finger and three toes to be able to run everything I wanted to in parallel; unlimited money can buy replacement toes but not additional time.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "NaN when accuracy reaches 1 with logistic loss", - "output": "

I'm getting this behaviour just by running the mnist_cnn.py example.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Add a character-based RNN example.", - "output": "

Here is a code sample. This code divides a long character string into chunks of 200 characters, and it learns a model for the next character given the previous ones. At the end it (inefficiently) generates 128 sentences, each of 200 chars.

\n
import numpy\nimport sys\nsys.path.append('/home/USER/python/keras/')\n\n# Obtain the corpus of character sequence to train from.\n# Here it is just the sequence 123456789 repeated 100000 times.\nx = \"123456789\"*100000\n\n# Construct a dictionary, and the reverse dictionary for the participating chars.\n# '*\" is a 'start-sequence' character.\ndct = ['*'] + list(set(x))\nmax_features = len(dct)\nrev_dct = [(j, i) for i, j in enumerate(dct)]\nrev_dct = dict(rev_dct)\n\n# Convert the characters to their dct indexes. \nx = [rev_dct[ch] for ch in x]\n\n# Divide the corpuse to substrings of length 200.\nn_timestamps = 200\nx = x[:len(x)- len(x) % n_timestamps]\nx = numpy.array(x, dtype='int32').reshape((-1, n_timestamps))\n\n# Generate input and ouput per substring, as an indicator matrix.\ny = numpy.zeros((x.shape[0], x.shape[1], max_features), dtype='int32')\nfor i in numpy.arange(x.shape[0]):\n    for j in numpy.arange(x.shape[1]):\n        y[i, j, x[i, j]] = 1        \n\n# Shift-1 the input sequences to the right, and make them start with '*'.\nx = numpy.roll(y, 1, axis=1)\nx[:, 0, :] = 0\nx[:, 0, 0] = 1\n\n# Build the model.\nfrom keras.models import Sequential\nfrom keras.layers.core import TimeDistributedDense, Dropout, Activation\nfrom keras.layers.recurrent import LSTM\n\nmodel = Sequential()\nmodel.add(LSTM(max_features, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(LSTM(256, 256, return_sequences=True))\nmodel.add(TimeDistributedDense(256, max_features))\nmodel.add(Activation('time_distributed_softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel.fit(x, y, batch_size=64, nb_epoch=50)\n\n# Sample 128 sentences (200 characters each) from model.\n\ndef mnrnd(probs):\n    rnd = numpy.random.random()\n    for i in xrange(len(probs)):\n        rnd -= probs[i]\n        if rnd <= 0:\n            return i\n    return i\n\nsentences = numpy.zeros((128, n_timestamps+1, max_features))\nsentences[:, 0, 0] = 1\n\n# Start sampling char-sequences. At each iteration i the probability over\n# the i-th character of each sequences is computed. \nfor i in numpy.arange(n_timestamps):\n    probs = model.predict_proba(sentences)[:,i,:]\n    # Go over each sequence and sample the i-th character.\n    for j in numpy.arange(len(sentences)):\n        sentences[j, i+1, mnrnd(probs[j, :])] = 1\nsentences = [sentence[1:].nonzero()[1] for sentence in sentences]\n\n# Convert to readable text.\ntext = []\nfor sentence in sentences:\n    text.append(''.join([dct[word] for word in sentence]))\n
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Sequence of floats as an input to LSTM", - "output": "

So where's time in your problem? You're talking about 'sequences', which implies a time axis.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Cannot pass index array to categorical_crossentropy objective", - "output": "

I think this is because Sequential and Graph use get_output() of the last layer to get the shape/type of the variable. If I understand correctly, Theano picks which variation of categorical_crossentropy you're using at compile time based on the predeclared shapes.

\n

This could be alleviated by having an option which lets you explicitly declare the shape/type of Sequential.y, but it would be messier for Graph.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "test_models.py fails", - "output": "

Thank you for pointing this out! It's fixed now. A screw-up on my part.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Host documentation on readthedocs.org", - "output": "

Hi,
\nIt works with this docker image : https://hub.docker.com/r/fiware/md2pdf/.
\nI generated the documentation with Keras==2.0.8.
\nPDF: https://drive.google.com/open?id=0BxaH5r4iW8cWeEdEMG1lQ0lIMFk

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Can I do joint layer training in Keras?", - "output": "

Sure, you'd need to use a Merge layer. Here's the corresponding doc, including a code snippet similar to what you want to do: http://keras.io/layers/core/#merge
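
\n

In the same spirit as that snippet, here is a minimal, untested sketch with the old Sequential/Merge API (X_left, X_right and the layer sizes are placeholders):

\n
from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Merge\n\nleft = Sequential()\nleft.add(Dense(784, 128))\nleft.add(Activation('relu'))\n\nright = Sequential()\nright.add(Dense(784, 128))\nright.add(Activation('relu'))\n\n# both branches are trained jointly through the merged model\nmodel = Sequential()\nmodel.add(Merge([left, right], mode='concat'))\nmodel.add(Dense(256, 10))\nmodel.add(Activation('softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel.fit([X_left, X_right], y, batch_size=32, nb_epoch=10)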

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Using text.tokenizer for a CSV File", - "output": "

Note that the Keras tokenizer only generates 1-grams, not n-grams. This might not be what you want.

\n

It is possible to pass a CSV file to a method such as tokenizer.fit_on_texts(csv) or tokenizer.texts_to_matrix(csv) (the tokenizer can handle any iterator over strings). The exact configuration would depend on your CSV file...
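
\n

As a rough, untested sketch (the file name and the index of the text column are just placeholders):

\n
import csv\nfrom keras.preprocessing.text import Tokenizer\n\nwith open('data.csv') as f:\n    texts = [row[1] for row in csv.reader(f)]  # assuming column 1 holds the text\n\ntokenizer = Tokenizer(nb_words=1000)\ntokenizer.fit_on_texts(texts)\nX = tokenizer.texts_to_matrix(texts, mode='binary')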

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "what if my data can note load into memory?", - "output": "

@tleeuwenburg I found that Caffe uses lmdb. I think I can read a batch from lmdb and train the model batch by batch. I will try it out. Thank you.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Add h5py to the requires list", - "output": "

I want to load VGG16 weights in Keras but I got \"load_weights requires h5py\"!! I installed h5py and imported it. What should I do?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Imbalanced classes", - "output": "

Closing this, discussion in #188

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Masks for RNNs", - "output": "

@mbchang in general after this discussion Keras ended up moving to a separate explicitly sent mask after all, rather than a special masking value.

\n

Embeddings take a mask_zero boolean parameter which can generate that mask automatically anywhere there's a 0 on the input.
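
\n

A minimal, untested sketch of that route, written against the old Layer(input_dim, output_dim) signatures used elsewhere in this thread (vocabulary size and layer sizes are placeholders):

\n
from keras.models import Sequential\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers.recurrent import LSTM\n\nmodel = Sequential()\n# index 0 is reserved for padding; the generated mask is propagated to the LSTM\nmodel.add(Embedding(10000, 128, mask_zero=True))\nmodel.add(LSTM(128, 64))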

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Convolution2D does not allow for weight regularization", - "output": "

Yes, adding it should not be an issue. I'll put it on the backlog.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Merge two CNNs has the Error:\"TypeError: 'int' object is not iterable\"", - "output": "

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "[help] Constructing a synced sequence input and output RNN ", - "output": "

No, TimeDistributedDense is exactly as it sounds: simply a Dense layer that feeds each of its inputs forward in time; the distinction between Dense and TimeDistributedDense is simply that a Dense layer expects 2D input (batch_size, sample_size) whereas TimeDistributedDense expects 3D input (batch_size, time_steps, sample_size). It should be used in conjunction with TimeDistributedSoftmax for the same reason (2D vs. 3D expected input).

\n

There is a GRU layer, however: https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L156-253

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Reinforcement Learning", - "output": "

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Loading in data", - "output": "

this worked. thanks for the info!

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Unable to Install on Mac OS", - "output": "

I installed hdf5 from source, installed h5py (pip install h5py).
\nreinstalled python, numpy, scipy...etc. and then it finally worked.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Predicting sequence with recurrent output layer does not work", - "output": "

That was quick, thanks!

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "A bit of a problem with convolutional layers", - "output": "

Got the point, so I have to change that layer to the following (as 256*0.25 = 64):

\n
model.add(Flatten())\nmodel.add(Dense(64*64*64, 512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\n
\n

thanks pranv

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "The validation loss and accuracy", - "output": "

https://en.wikipedia.org/wiki/Overfitting

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Stacking multiple LSTM layers yields an error", - "output": "

A LSTM layer, as per the docs, will return the last vector by default rather than the entire sequence. In order to return the entire sequence (which is necessary to be able to stack LSTM), use the constructor argument return_sequences=True.
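
\n

For example, a minimal, untested sketch with the old LSTM(input_dim, output_dim) signature (layer sizes are placeholders):

\n
from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\nfrom keras.layers.recurrent import LSTM\n\nmodel = Sequential()\n# the first LSTM must return the full sequence so the second one receives 3D input\nmodel.add(LSTM(128, 64, return_sequences=True))\n# the last LSTM returns only the final vector, which feeds the classifier\nmodel.add(LSTM(64, 32))\nmodel.add(Dense(32, 10))\nmodel.add(Activation('softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')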

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Error when using TimeDistributedDense", - "output": "

Thanks -- I'll make sure to change that.

\n

Along that line, though, there's a snippet in the docs that made me shy away from categorical crossentropy:

\n
\n

Note: using this objective requires that your labels are binary arrays of shape (nb_samples, nb_classes).

\n
\n

I interpreted that as meaning it wouldn't work with the (nb_samples, nb_timesteps, nb_classes) output from TimeDistributedDense -- it seems from messing around with it, and from other code snippets I've seen, that it actually works just fine.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "dimension mismatch?", - "output": "", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Merge fails on GPU but passes on CPU", - "output": "

Correct, it was a Theano bug with T.Concatenate(). It was fixed in the dev version of Theano a few weeks ago, and it will be on PyPI soon. Not worth fixing on the Keras side...

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Max margin objectives", - "output": "

I have a dataset with positive and negative samples. How can I train and test with a max margin objective in Keras? The dataset is as follows:

\n
\n

sentA, CategoryA ,+
\nsentA, CategoryB,+
\nsentA, CategoryC,-

\n
\n

and so on
\nDo i need to update per sample?
\nThanks

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Thoughts on adding a scikit-learn compatible API?", - "output": "

There is a PR for this functionality here:
\nhttps://github.com/fchollet/keras/pull/1225/files

\n

It wasn't merged because it looks to me like a big hack. If you think of a
\ncleaner, simpler way to do it, please submit a new PR.

\n

On 11 February 2016 at 11:52, Gustavo De Mari Pereira <
\nnotifications@github.com> wrote:

\n
\n

+1 on @mbatchkarov https://github.com/mbatchkarov comment

\n

\u2014
\nReply to this email directly or view it on GitHub
\n#149 (comment).

\n
", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Multiple Inputs", - "output": "

The code snippet example in the doc page I linked provides all the info you need.

\n

You will train your model with list of inputs:

\n
model.fit([X_CNN1, X_CNN2], y) 
\n

Which will feed at time t X_CNN1[t] to CNN1 and X_CNN2[t] to CNN2.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Variable size sequence RNN input", - "output": "

@ssamot Will you reference the other comment for everyone else's convenience? Thanks!

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Feeding training data in batches", - "output": "

@fchollet You just mentioned that passing nb_epoch=1 to model.fit(X, y) will resolve the memory issue, which confuses me. Why? The training data (X, y) still needs to be loaded into memory all at once beforehand, right?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Python keras package returns an error about data dimensions", - "output": "

X and Y are indexable arrays. There are examples in /keras/keras/datasets/imdb.py in load_data().

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Regularization is throwing exception", - "output": "

I incorrectly used a regularizer where a constraint was needed.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Out of memory error", - "output": "

Error allocating 3242496000 bytes of device memory (out of memory). Driver report 876146688 bytes free and 4294246400 bytes total

\n

This tells you everything you need to know. Your GPU does not have enough memory for this task.

\n

Things you can try:

\n\n

Alternative solutions...

\n\n
\n

Obviously a larger vocab means a larger embed layer.

\n
\n

Yes.

\n
\n

Do longer sentences mean a bigger network?

\n
\n

No, the network size will be the same, but each sample will be larger, so you will be using more memory to load each batch.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Parallel Model", - "output": "

Resolved by the Graph model.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "60 secondes by epoch (cifar10_cnn.py example)", - "output": "

Yes, that's pretty much standard.

\n

If you want faster training, you can try to increase the batch size, reduce the size of the network, or experiment with different optimizers (SGD should be fairly fast though, at least in time per epoch, not necessarily in convergence speed).
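
\n

For example (untested), both knobs are one-liners:

\n
from keras.optimizers import SGD\n\n# plain SGD is cheap per update\nsgd = SGD(lr=0.01, momentum=0.9, nesterov=True)\nmodel.compile(loss='categorical_crossentropy', optimizer=sgd)\n\n# larger batches usually mean better GPU utilization, hence faster epochs\nmodel.fit(X_train, Y_train, batch_size=64, nb_epoch=10)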

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "LSTM model compile time is very high", - "output": "

Can you delete your theano temp dir? The best would be if you could try to reproduce on another machine, e.g. Google Cloud or AWS.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "[Question] Loading images on disk for training", - "output": "

@davidlis did you ever find examples of how to do this?

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "model.fit(shuffle=False) gives TypeError", - "output": "

Good catch : ) This is fixed now.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "General questions", - "output": "

I am very interested in word/sentence/text embedding, so there will definitely be more to come in that area. In the meantime, I suggest you check out Gensim.

\n

Anything else planned: yes, I'm also interested in memory networks and their application to text understanding and question answering. I will most likely add Keras support for memory networks in the near future.

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "Issues loading sub-modules", - "output": "

Yes thanks!

", - "input": "", - "system": "you are the github assistant" - }, - { - "instruction": "pooling size > stride", - "output": "

No idea about that. I think they were working on that also, but I wouldn't know since I always run my code on the GPU anyway :)

", - "input": "", - "system": "you are the github assistant" - } -] \ No newline at end of file