pruefungen:hauptstudium:ls4:bs-2042-03-01 (created 20.08.2019 by ho23noji, page since deleted)
Grade: 1.0
The exam started with Prof. Riess asking me to give an overview of all the topics we discussed in the course.
Q. He asked me about density estimation and what the idea behind it was.
A. I started with the goal of density estimation and the approaches we could follow for the task (parametric, non-parametric).
Q. Yeah, the bare-bones Parzen window estimate. Could you elaborate on that, perhaps?
A. I started once again with the idea of using a kernel function to estimate the density, and briefly discussed how it is formulated by writing down the formulas.
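The estimate discussed here can be sketched in a few lines. This is a minimal illustration of the textbook formulation, not the exercise code; the function name, sample values, and bandwidth h are made up:

```python
import numpy as np

def parzen_estimate(x, samples, h, kernel="gaussian"):
    """Parzen-window density estimate at x:  p(x) = 1/(N h) * sum_i K((x - x_i) / h)."""
    u = (x - samples) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    else:
        # box kernel: 1 inside the unit-width window around each sample, 0 outside
        k = (np.abs(u) <= 0.5).astype(float)
    return k.sum() / (len(samples) * h)

samples = np.array([0.0, 0.5, 1.0, 1.5])
density_at_origin = parzen_estimate(0.0, samples, h=0.5)
```

Because each kernel integrates to one, the resulting estimate integrates to one as well, for either kernel choice.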
Q. Great. We implemented something like this in the first exercise, where we were trying to visualize the results of this approach. Say you have a friend who sends you an output image of a raccoon, but you don't know whether they used a Gaussian or a box kernel. Is there a way to tell?
He then drew a 1D feature space with samples on the x-axis.
A. I assumed that with a Gaussian kernel you'd get a visually smoother output, while a box kernel might produce block artifacts.
He was clearly looking for something more, and gave me a bit of time with it, asking me what …
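One property that does distinguish the two kernels can be checked numerically: a box-kernel estimate is piecewise constant, only ever takes values that are integer multiples of 1/(N h), and is exactly zero away from the samples, whereas a Gaussian-kernel estimate is smooth and strictly positive everywhere. A sketch with made-up samples and bandwidth (I can't know if this is what he was after):

```python
import numpy as np

samples = np.array([0.0, 1.0, 3.0])   # toy 1-D samples, chosen for illustration
h, N = 0.5, 3
xs = np.linspace(-1.0, 4.0, 1001)

# box-kernel estimate: a count of covering windows, so only multiples of 1/(N h)
box = np.array([np.sum(np.abs((x - samples) / h) <= 0.5) for x in xs]) / (N * h)

# Gaussian-kernel estimate: smooth and strictly positive at every evaluation point
gauss = np.array([np.sum(np.exp(-0.5 * ((x - samples) / h) ** 2)) for x in xs])
gauss /= N * h * np.sqrt(2.0 * np.pi)
```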
Q. So we also discussed clustering. What is the idea behind that?
A. I explained what the goal of a clustering algorithm is, and then briefly touched on the two algorithms we generally use (k-means and a GMM).
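The k-means half of that answer can be sketched as below. The init parameter is my addition to keep the example deterministic; a GMM would replace the hard assignment step with soft responsibilities:

```python
import numpy as np

def kmeans(X, k, init, iters=100):
    """Plain k-means: alternate nearest-center assignment and mean updates."""
    centers = np.asarray(init, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center moves to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```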
Q. We also discussed another algorithm in class that isn't a conventional clustering algorithm but can be used to perform the task.
A. Yes, we talked about the mean shift algorithm. I explained the idea behind mean shift (looking for modes of the density), and how we can get to these modes by looking for zeros of the density gradient. I then explained how mean shift works as a clustering algorithm.
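The mode-seeking step can be sketched as follows, assuming a Gaussian kernel (the bandwidth h is a free parameter I picked for the example). With this kernel, the fixed-point update is simply a weighted sample mean, and iterating it climbs the density gradient until it vanishes at a mode:

```python
import numpy as np

def mean_shift_point(x, samples, h, iters=100, tol=1e-6):
    """Shift x toward a mode of the Gaussian kernel density estimate.

    Each step replaces x by the weighted mean of the samples,
    with weights w_i = exp(-||x - x_i||^2 / (2 h^2)).
    """
    for _ in range(iters):
        w = np.exp(-np.sum((samples - x) ** 2, axis=-1) / (2.0 * h ** 2))
        new_x = (w[:, None] * samples).sum(axis=0) / w.sum()
        if np.linalg.norm(new_x - x) < tol:
            break
        x = new_x
    return x
```

For clustering, one runs this from every sample and groups points whose iterations land on the same mode.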
Q. Say your friend sends you results from another project he is working on: basically an image of a dataset on which some clustering has been performed. But your friend isn't too talkative, and you want to find out how he might have arrived at the clusters.
A. I drew two cases where a conventional k-means algorithm doesn't …
Q. I would like to move on to probabilistic graphical models. But before we go into details, tell me how we tackle the probability space in these problems. He wrote down a joint probability density function (similar to the lecture), with N input observations and M output observations.
A. I remarked that modelling all combinations leads to a probability space that grows exponentially and is practically impossible to model, so we use the Naive Bayes assumption. I then wrote some formulas detailing how the original space breaks down under this assumption and the model becomes tractable. I also drew the probabilistic graph for Naive Bayes.
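The factorization argument in miniature: under the Naive Bayes assumption the joint factors as p(y, x_1, …, x_N) = p(y) · ∏_n p(x_n | y), so a classifier only needs one small table per feature instead of the full joint. The priors and conditional tables below are made-up numbers for illustration:

```python
import numpy as np

def log_posterior(x, prior, cond):
    """log p(y | x) up to an additive constant, for discrete features.

    prior: (C,) class priors; cond: one (C, V_n) table p(x_n | y) per feature.
    The sum of per-feature log terms is the Naive Bayes factorization in log space.
    """
    lp = np.log(prior)
    for n, xn in enumerate(x):
        lp += np.log(cond[n][:, xn])
    return lp

prior = np.array([0.6, 0.4])
cond = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # p(x_0 | y), rows indexed by class y
        np.array([[0.7, 0.3], [0.5, 0.5]])]   # p(x_1 | y)
scores = log_posterior([1, 0], prior, cond)
pred = int(scores.argmax())
```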
Q. Great, but Naive Bayes is just a little naive. Could you list some potential application-specific problems, and how we address them?
A. For data where the sequence of events is important (I listed an example from a reference: words with parts-of-speech as labels, where the sequence information matters), we can use a Hidden Markov Model. I wrote a few formulas to show how Naive Bayes can be adapted into an HMM.
Q. Could you draw an HMM graph?
A. I listed the elements of an HMM, drew the graph, and explained what a left-right HMM is.
Q. Great, we also have some tasks in HMMs and specific algorithms for those tasks.
A. I explained inference and training, and the tasks and algorithms in each.
Q. Could you explain what happens in the training phase? You don't need to write the formulas.
A. I gave the algorithm's name; it's an iterative expectation-maximization approach. I explained what we do in the expectation and maximization steps.
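As a small illustration of the machinery behind that training procedure: the E-step is built from forward and backward messages, and the forward pass alone, which computes the sequence likelihood, fits in a few lines. Parameter names here are generic, not from the lecture slides:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: p(o_1 .. o_T) for a discrete HMM.

    pi: (S,) initial state distribution; A: (S, S) transitions, A[i, j] = p(j | i);
    B: (S, V) emission probabilities; obs: observed symbol indices.
    The E-step of the iterative training combines these alpha messages with
    backward messages to get expected state/transition counts, which the
    M-step re-normalizes into new parameters.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())
```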
Q. So the solution isn't globally optimal?
A. I said it wasn't, since the iterative approach only converges to a local optimum.
Environment:

Preparation: