Advanced Deep Learning
Sequence Models
1.Which tool is NOT suited for building ANN models?
Ans:Excel
Neural Networks & Deep Learning
2. “Convolutional Neural Networks can perform various types of transformation (rotations or scaling) in an input.” Is the statement correct, True or False?
Ans: False
Convolutional Neural Networks
3.CNN is mostly used when there is a/an?
Ans:Unstructured data
Sequence Models
4.How are deep learning models built in Keras?
Ans:By using Sequential models
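A minimal sketch of what this looks like in practice; the layer sizes, input shape, and compile settings below are illustrative assumptions, not part of the question:
from tensorflow import keras
from tensorflow.keras import layers

# Layers are stacked in order inside a Sequential model
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),  # hidden layer
    layers.Dense(1, activation="sigmoid"),                   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])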
Convolutional Neural Networks
5.Which of the following activation functions cannot be used in the output layer of an image classification model?
Ans:ReLU
Neural Networks & Deep Learning
6.ReLU activation function outputs zero when:
Ans:Input is less than or equal to zero
Sequence Models
7.What is generally the sequence followed when building a neural network architecture for semantic segmentation of an image?
Ans:Convolutional network on input and deconvolutional network on output
8. A tensor is similar to
Ans:Data Array
9.How do we perform calculations in TensorFlow?
Ans:We launch the computational graph in a session
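A minimal sketch, assuming TensorFlow 1.x-style graph execution (in TensorFlow 2.x the same calls live under tf.compat.v1):
import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)            # only adds a node to the computational graph

with tf.Session() as sess:  # launch the graph in a session
    print(sess.run(c))      # the value 5.0 is computed here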
Convolutional Neural Networks
10.Which of the following allows the output to have the same height and width as those of the input?
Ans:Same padding
Zero padding
Neural Networks & Deep Learning
11.Which of the following is/are Limitations of deep learning?
Ans:Both A and B
Convolutional Neural Networks
12.Autoencoder is an example of Deep Learning.
Ans:True
Neural Networks & Deep Learning
13.All the neurons in a convolution layer have different weights and biases.
Ans:True
14.If you increase the number of hidden layers in a Multi-Layer Perceptron, the classification error of test data does not always decrease
Ans:True
Sequence Models
15.Can a neural network model the function (y = 1/x) in TensorFlow?
Ans:Yes
Neural Networks & Deep Learning
16.The deeper layers of the network compute complex features.
Ans:True
Sequence Models
17.Which of the following statement(s) correctly represents a real neuron in TensorFlow?
Ans:All of the above statements are valid
Neural Networks & Deep Learning
18.Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this?
Ans:Any of these
Sequence Models
19.out=tf.sigmoid(tf.add(tf.matmul(X,W), b))
Ans:Logistic Regression equation
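A minimal sketch of where this line sits in a model, assuming TensorFlow 1.x-style placeholders and illustrative shapes:
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 4])   # input features (illustrative shape)
W = tf.Variable(tf.zeros([4, 1]))                 # weights
b = tf.Variable(tf.zeros([1]))                    # bias
out = tf.sigmoid(tf.add(tf.matmul(X, W), b))      # sigmoid of a linear combination = logistic regression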
20.How do calculations work in TensorFlow?
Ans:Through Computational Graphs
Convolutional Neural Networks
21.The number of nodes in the input layer is 10 and the hidden layer is 5. The maximum number of connections from the input layer to the hidden layer is
Ans:50
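With full connectivity, each of the 10 input nodes can connect to each of the 5 hidden nodes, so the maximum number of connections is 10 x 5 = 50.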
Neural Networks & Deep Learning
22.In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?
Ans:Iteratively check, after assigning a value, how far you are from the best values, and slightly change the assigned values to make them better
Sequence Models
23.Exploding gradient problem is an issue in training deep networks where the gradient gets so large that the loss goes to an infinitely high value and then explodes. What is the probable approach when dealing with the “Exploding Gradient” problem in RNNs?
Ans:Gradient clipping
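A minimal sketch of gradient clipping in Keras; clipnorm=1.0 is an illustrative threshold, not a recommendation:
from tensorflow import keras

# Each gradient whose norm exceeds 1.0 is rescaled before the update
optimizer = keras.optimizers.Adam(clipnorm=1.0)
# model.compile(optimizer=optimizer, loss="mse")   # assumes a model defined elsewhere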
Convolutional Neural Networks
24.Suppose you have an input volume of dimension 64x64x16. How many parameters would a single 1x1 convolutional filter have (including the bias)?
Ans:17
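A 1x1 filter spans the full input depth, so it has 1 x 1 x 16 = 16 weights plus 1 bias, giving 17 parameters.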
25.Convolutional Neural Networks need more memory than DNNs
Ans:False
26.CNNs allow the network to concentrate on low-level features in the first hidden layer, then assemble them into higher-level features in the next hidden layer, and so on.
Ans:True
Neural Networks & Deep Learning
27.Gradient at a given layer is the product of all gradients at the previous layers.
Ans:True
Convolutional Neural Networks
28.CNNs don’t use back-propagation during inference.
Ans:False
Neural Networks & Deep Learning
29.A _________________ matches or surpasses the output of an individual neuron to a visual stimulus.
Ans:Convolution
Convolutional Neural Networks
30.You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7x7, and stride of 1. You want to use a “same” convolution. What is the padding?
Ans:3
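For a “same” convolution with stride 1, the padding is p = (f − 1) / 2 = (7 − 1) / 2 = 3.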
Neural Networks & Deep Learning
31.Which of the following is a correct order for the Convolutional Neural Network operation?
Ans:Convolution -> max pooling -> flattening -> full connection
32.Which of the following is FALSE about Neural Networks?
Ans:We can use different gradient descent algorithms in different epochs
Sequence Models
33.Mini-Batch sizes when defining a neural network are preferred to be multiple of 2’s such as 256 or 512. What is the reason behind it?
Ans:Parallelization of neural network is best when the memory is used optimally
34.Which of the following statement(s) correctly represents a real neuron in TensorFlow?
Ans:All of the above statements are valid
35.An artificial neuron is so powerful that it can perform complex tasks by simply performing a linear combination of its inputs.
Ans:False
36.Keras is a deep learning framework built on top of which tool?
Ans:TensorFlow
37.In TensorFlow, knowing the weight and bias of each neuron is the most crucial step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?
Ans:Iteratively check, after assigning a value, how far you are from the best values, and slightly change the assigned values to make them better
38.Can we use a GPU for faster computations in TensorFlow?
Ans:Yes, possible
Neural Networks & Deep Learning
39.Batch normalization helps to prevent:
Ans:Both A and B
Convolutional Neural Networks
40.In a classification problem, which of the following activation functions is most widely used in the output layer of neural networks?
Ans:Sigmoid function
Sequence Models
41.tf.reduce_sum(tf.square(out-Y))
Ans:Squared Error loss function
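A minimal, self-contained sketch of this loss, assuming TensorFlow 1.x-style placeholders with illustrative shapes:
import tensorflow as tf

out = tf.placeholder(tf.float32, shape=[None, 1])   # model predictions
Y = tf.placeholder(tf.float32, shape=[None, 1])     # true targets
loss = tf.reduce_sum(tf.square(out - Y))            # sum of squared errors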
Convolutional Neural Networks
42.Which of the following methods DOES NOT prevent a model from overfitting to the training set?
Ans:Pooling
43.Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5x5. How many parameters does this hidden layer have (without bias)?
Ans:7500
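Each 5x5 filter spans the 3 colour channels, so it has 5 x 5 x 3 = 75 weights; 100 such filters give 100 x 75 = 7,500 parameters without bias.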
Neural Networks & Deep Learning
44.Which of the following is FALSE about the sigmoid and tanh activation functions?
Ans:These do not suffer from vanishing and exploding gradient problems unlike ReLU
Convolutional Neural Networks
45.Convolutional Neural Network is used in _________________.
Ans:Image classification
Text classification
Object Detection
46.Convolutional Neural Networks can perform various types of transformation (rotations or scaling) in an input.
Ans:False
47.The distance between two consecutive receptive fields is called the _________.
Ans:Stride
Sequence Models
48.Can we have multidimensional tensors?
Ans:Yes, possible
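A minimal sketch; the shape is illustrative:
import tensorflow as tf

t = tf.zeros([2, 3, 4])   # a rank-3 tensor with shape 2 x 3 x 4
print(t.shape)            # (2, 3, 4)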
Convolutional Neural Networks
49.Which of the following functions can be used as an activation function in the output layer of a CNN if we wish to predict the probabilities of n classes (p1, p2, ..., pn) such that the sum of p over all n classes equals 1?
Ans:Softmax
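A minimal sketch showing that softmax turns raw scores into probabilities that sum to 1; the logits values are illustrative:
import tensorflow as tf

logits = tf.constant([2.0, 1.0, 0.1])
probs = tf.nn.softmax(logits)      # roughly [0.66, 0.24, 0.10]
total = tf.reduce_sum(probs)       # 1.0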
50.How many types of layers make up a Convolutional Neural Network?
Ans:3
There are three types of layers that make up a CNN: convolutional layers, pooling layers, and fully-connected (FC) layers.
Neural Networks & Deep Learning
51.Which of the following is TRUE about Pooling Layer in CNN?
Ans:All of the above
Convolutional Neural Networks
52.In which neural net architecture does weight sharing occur?
Ans:Convolutional neural Network
Recurrent Neural Network
Neural Networks & Deep Learning
53.In a CNN, having max pooling always decreases the parameters.
Ans:False
54.In FeedForward ANN, information flow is _________.
Ans:unidirectional
Convolutional Neural Networks
55.Suppose you are applying a sliding windows classifier (non-convolutional implementation). Increasing the stride would tend to increase accuracy, but decrease computational cost.
Ans: False
Sequence Models
56.Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this in TensorFlow?
Ans:Any of these
57.Which tool is best suited for solving Deep Learning problems?
Ans:TensorFlow
Convolutional Neural Networks
58.The input image has been converted into a matrix of size 28 X 28 and a kernel/filter of size 7 X 7 with a stride of 1. What will be the size of the convoluted matrix?
Ans:22x22
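Output size = (n − f) / s + 1 = (28 − 7) / 1 + 1 = 22, so the convolved matrix is 22 x 22.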
59.The first layer of a deep learning network is called the hidden layer.
Ans:False
Neural Networks & Deep Learning
60.Which of the following steps can be taken to prevent overfitting in a neural network?
Ans:All of the above
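A minimal sketch of one such step, dropout, in Keras; the layer sizes and dropout rate are illustrative:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,)),
    layers.Dropout(0.5),                   # randomly zeroes 50% of activations during training
    layers.Dense(1, activation="sigmoid"),
])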
Sequence Models
61.Which of the following is correct? 1.Dropout randomly masks the input weights to a neuron 2.Dropconnect randomly masks both input and output weights to a neuron
Ans:Both 1 and 2 are False
62.out=tf.add(tf.matmul(X,W), b)
Ans:Linear Regression equation
Neural Networks & Deep Learning
63.Which of the following statements is true when you use 1×1 convolutions in a CNN?
Ans:All of the above
64.What are the steps for using a gradient descent algorithm? 1.Calculate error between the actual value and the predicted value 2.Reiterate until you find the best weights of network 3.Pass an input through the network and get values from output layer 4.Initialize random weight and bias 5.Go to each neurons which contributes to the error and change its respective values to reduce the error
Ans:4, 3, 1, 5, 2
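A minimal sketch mapping these steps onto one training loop, using TensorFlow 2.x eager execution; the data, learning rate, and shapes are illustrative:
import tensorflow as tf

X = tf.constant([[1.0], [2.0]])
y = tf.constant([[2.0], [4.0]])
w = tf.Variable(tf.random.normal([1, 1]))          # step 4: initialize random weight and bias
b = tf.Variable(tf.zeros([1]))
lr = 0.01

for _ in range(100):                               # step 2: reiterate
    with tf.GradientTape() as tape:
        pred = tf.matmul(X, w) + b                 # step 3: pass input through the network
        loss = tf.reduce_mean(tf.square(pred - y)) # step 1: error between actual and predicted
    dw, db = tape.gradient(loss, [w, b])           # step 5: adjust the weights that contribute to the error
    w.assign_sub(lr * dw)
    b.assign_sub(lr * db)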
Sequence Models
65.An increase in the size of a convolutional kernel would necessarily increase the performance of a convolutional neural network.
Ans:False
Convolutional Neural Networks
66.Zero Padding is also known as __________.
Ans:Same padding
Neural Networks & Deep Learning
67.Which of the following statement(s) correctly represents a real neuron?
Ans:All of the above statements are valid
68.Which of the following is true about model capacity?
Ans:As number of hidden layers increase, model capacity increases
69.Which of these are hyperparameters?
Ans:learning rate
Number of epochs
70.Which of the following is a hyperparameter in a neural network?
Ans:All of these
Convolutional Neural Networks
71.Why are convolutional neural networks taking off quickly in recent times?
Ans:Access to large amount of digitized data
Integration of feature extraction within the training process
Sequence Models
72.How do you feed external data into placeholders?
Ans:By using feed_dict
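A minimal sketch, assuming TensorFlow 1.x-style placeholders:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])   # placeholder for external data
doubled = x * 2.0

with tf.Session() as sess:
    # feed_dict supplies the external values at run time: prints [2. 4. 6.]
    print(sess.run(doubled, feed_dict={x: [1.0, 2.0, 3.0]}))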
Neural Networks & Deep Learning
73.The input image has been converted into a matrix of size 28 X 28 and a kernel/filter of size 7 X 7 with a stride of 1. What will be the size of the convoluted matrix?
Ans:22x22
Convolutional Neural Networks
74.Max pooling is used for?
Ans:Adding local invariance
Reducing dimensionality
Sequence Models
75.A recurrent neural network can be unfolded into a fully-connected neural network with infinite length.
Ans:True