Resource summary
Question 1
Question
Which of the following best describes the No Free Lunch Theorem in AI?
Answers
- AI algorithms perform better when trained with more data
- No single algorithm is best for all problems
- AI algorithms cannot surpass human intelligence
- Training AI models requires computational optimization
Question 2
Question
What is catastrophic forgetting in neural networks?
Answers
- Overfitting to the training data
- Deletion of weights during training
- Loss of previously learned information when learning new tasks
- Memory overflow during backpropagation
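A minimal NumPy sketch of the effect (all numbers hypothetical): a linear model fit to task A, then trained only on task B, loses its task-A accuracy because the new gradients overwrite the previously learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_a, w_b = np.array([2.0, -1.0]), np.array([-3.0, 4.0])  # two made-up tasks
y_a, y_b = X @ w_a, X @ w_b

def fit(w, y, steps=200, lr=0.1):
    # plain gradient descent on mean squared error
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(X)
    return w

w = fit(np.zeros(2), y_a)           # learn task A
print(np.mean((X @ w - y_a) ** 2))  # near 0: task A learned
w = fit(w, y_b)                     # now train only on task B
print(np.mean((X @ w - y_a) ** 2))  # large: task A "forgotten"
```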
Question 3
Question
In Deep Q-Networks (DQN), what is the main purpose of the target network?
Answers
- To reduce training time
- To explore the environment
- To stabilize learning by fixing the Q-value targets
- To store policy gradients
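A minimal sketch of the target-network idea, with a tabular Q-function standing in for the two networks (dimensions and hyperparameters are made up): the bootstrapped targets are computed from a frozen copy that is only synchronized periodically, which keeps the regression targets stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, lr = 4, 2, 0.99, 0.1

q_online = rng.normal(size=(n_states, n_actions))  # updated every step
q_target = q_online.copy()                         # frozen copy

for step in range(1, 101):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s_next, r = rng.integers(n_states), rng.normal()
    # TD target uses the *frozen* copy: r + gamma * max_a' Q_target(s', a')
    target = r + gamma * q_target[s_next].max()
    q_online[s, a] += lr * (target - q_online[s, a])
    if step % 25 == 0:
        q_target = q_online.copy()                 # periodic hard update
```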
Question 4
Question
Which activation function is most prone to the dying neuron problem?
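A short NumPy illustration of the dying-ReLU effect: for non-positive pre-activations, both the ReLU output and its gradient are exactly zero, so a neuron pushed into that regime receives no further weight updates; a leaky variant keeps a small gradient.

```python
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # pre-activations

relu_grad  = (z > 0).astype(float)           # zero everywhere z <= 0
leaky_grad = np.where(z > 0, 1.0, 0.01)      # never exactly zero

print(relu_grad)   # [0. 0. 0. 1. 1.]  -> no learning signal for z <= 0
print(leaky_grad)  # [0.01 0.01 0.01 1. 1.]
```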
Question 5
Question
What does the attention mechanism do in Transformer models?
Answers
- Reduces model parameters
- Assigns weights to input tokens based on relevance
- Prevents overfitting
- Compresses the input sequence
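A minimal single-head scaled dot-product attention sketch in NumPy (no masking, made-up dimensions), showing that each output row is a relevance-weighted average of the value vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) query-key similarity
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights         # relevance-weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))             # 5 tokens, 8-dim embeddings
out, w = attention(X, X, X)             # self-attention: Q = K = V = X
print(w.sum(axis=-1))                   # all ones: weights per token sum to 1
```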
Question 6
Question
Which technique is commonly used to deal with the vanishing gradient problem in RNNs?
Answers
- Dropout
- Gradient clipping
- Batch normalization
- Layer normalization
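For context, a sketch of global-norm gradient clipping as typically applied in RNN training (the threshold is arbitrary); strictly speaking, clipping is motivated by exploding rather than vanishing gradients, which are more often addressed architecturally with gated units such as LSTM/GRU.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    # Rescale all gradients proportionally if their joint norm is too large.
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]

grads = [np.full((3, 3), 10.0), np.full((3,), 10.0)]   # made-up gradients
clipped = clip_by_global_norm(grads)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))   # <= 5.0
```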
Question 7
Question
In game-playing AI, the Minimax algorithm assumes that:
Answers
- Both players make optimal decisions
- Only one player uses a rational strategy
- Players take random actions
- The game is stochastic in nature
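A minimal recursive Minimax sketch on a hypothetical two-ply game tree: the maximizer assumes the minimizer replies optimally at every node, and vice versa.

```python
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: terminal payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Made-up tree: root is the maximizer, each inner list is a minimizer node.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))    # -> 3: the minimizer limits each branch
```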
Question 8
Question
Which of the following models does not use the Markov property explicitly?
Answers
- Hidden Markov Model (HMM)
- Q-learning
- Recurrent Neural Network (RNN)
- Markov Decision Process (MDP)
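For reference, the first-order Markov property that HMMs, MDPs, and Q-learning build on; an RNN instead conditions on a hidden state that summarizes the entire history rather than only the current state.

```latex
P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0) = P(s_{t+1} \mid s_t, a_t)
```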
Question 9
Question
In transfer learning, which layer of a pre-trained network is most commonly replaced or fine-tuned for a new task?
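A common pattern, sketched here with PyTorch/torchvision (assuming a recent torchvision; the class count is hypothetical): freeze the pre-trained backbone and replace the final fully connected classification layer, which is the layer most often swapped out and fine-tuned.

```python
import torch.nn as nn
from torchvision import models

num_new_classes = 10                               # hypothetical target task

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on ImageNet
for p in model.parameters():                       # freeze the backbone
    p.requires_grad = False

# New head: randomly initialized and trainable (requires_grad is True by default).
model.fc = nn.Linear(model.fc.in_features, num_new_classes)
```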
Question 10
Question
Which of the following is not a valid assumption in the Naive Bayes classifier?
Answers
- Features are independent given the class
- The model uses Bayes' theorem
- Features are conditionally dependent on each other
- The prior probabilities are used
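A tiny NumPy sketch of the Naive Bayes computation (all probabilities made up): the posterior is proportional to P(y) times the product of per-feature likelihoods, and that product is justified only by the conditional-independence assumption.

```python
import numpy as np

# P(y | x) proportional to P(y) * prod_i P(x_i | y), for binary features.
priors = np.array([0.6, 0.4])                 # P(y) for two classes
likelihood = np.array([[0.8, 0.3],            # P(x_i = 1 | y = 0)
                       [0.2, 0.9]])           # P(x_i = 1 | y = 1)

x = np.array([1, 0])                          # observed feature vector
per_feature = np.where(x == 1, likelihood, 1 - likelihood)
posterior = priors * per_feature.prod(axis=1) # independence => product
print(posterior / posterior.sum())            # normalized P(y | x)
```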