.gitattributes CHANGED
@@ -59,6 +59,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
  data/dataset.json filter=lfs diff=lfs merge=lfs -text

- data/results.json filter=lfs diff=lfs merge=lfs -text
- data/github_dataset.json filter=lfs diff=lfs merge=lfs -text
- data/stackoverflow_dataset.json filter=lfs diff=lfs merge=lfs -text
 
 
 
 
README.md CHANGED
@@ -1,86 +1,35 @@
- ---
- configs:
- - config_name: train
-   data_files:
-   - split: train
-     path:
-     - data/stackoverflow_dataset.json
-     - data/github_dataset.json
- - config_name: test
-   data_files:
-   - split: test
-     path:
-     - data/testing_stackoverflow_dataset.json
-     - data/testing_github_dataset.json
-
- dataset_info:
- - config_name: train
-   features:
-   - name: instruction
-     dtype: string
-   - name: input
-     dtype: string
-   - name: output
-     dtype: string
-   - name: system
-     dtype: string
- configs:
- - config_name: default
- license: apache-2.0
- task_categories:
- - text-generation
- - question-answering
- language:
- - en
- tags:
- - eren5717
- - py_issues_en
- - dbl_issues
- size_categories:
- - 10K<n<100K
- questions_ids:
- - 275 lines
- data:
- - 1652 lines
- pretty_name: tensorflow,keras,pytorch
- ---
-
-
- # TensorFlow, PyTorch, and Keras Framework Dataset: StackOverflow & GitHub
-
- ## Dataset Overview
-
- This dataset contains a collection of questions, answers, and code snippets related to the **TensorFlow**, **PyTorch**, and **Keras** frameworks, sourced from **StackOverflow** and **GitHub** repositories. It provides a comprehensive resource for researchers, practitioners, and developers interested in analyzing and enhancing their understanding of these popular machine learning frameworks.
-
- ## Data Sources
-
- - **StackOverflow posts**: User-generated questions and answers focused on common issues, optimizations, and best practices in TensorFlow, PyTorch, and Keras.
- - **GitHub repositories**: Relevant code snippets, documentation, and discussions from open-source repositories that leverage these frameworks.
-
- ## Use Cases
-
- The dataset is designed to support various tasks such as:
- - **Natural Language Processing (NLP)**
- - **Sentiment analysis**
- - **Code summarization**
- - **Question-answering models**
-
- It is a valuable resource for AI research and development related to deep learning and machine learning frameworks.
-
- ## Data Structure
-
- - **StackOverflow Data**: Contains questions, answers, and comments. Each entry includes metadata such as post ID, user ID, score, and relevant tags.
- - **GitHub Data**: Includes code snippets, repository metadata, issues, and pull requests.
-
- ## License
-
- This dataset is made available for research and educational purposes. Please refer to the respective licenses of StackOverflow and GitHub for usage terms.
-
- ## Acknowledgements
-
- We would like to acknowledge StackOverflow and GitHub for providing the rich source of data that this dataset is based on.
-
- ## Citation
-
- If you use this dataset in your research, please cite it as follows:
-
+ ---
+ dataset_info:
+   features:
+   - name: instruction
+     dtype: string
+   - name: input
+     dtype: string
+   - name: output
+     dtype: string
+   - name: system
+     dtype: string
+
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: "data/dataset.json"
+   - split: test
+     path: "data/dataset.json"
+
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ - question-answering
+ language:
+ - en
+ tags:
+ - eren5717
+ size_categories:
+ - 10K<n<100K
+ questions_ids:
+ - 275 lines
+ data:
+ - 1652 lines
+ ---
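
The updated card declares a single `default` config whose `train` and `test` splits both point at `data/dataset.json`. A minimal sketch of how a card like this is typically consumed with the `datasets` library; the repository id below is a hypothetical placeholder, not taken from this diff:

```python
# Minimal sketch: load the dataset declared by the YAML card above.
from datasets import load_dataset

REPO_ID = "<user>/<dataset-name>"  # hypothetical placeholder; substitute the real Hub id

ds = load_dataset(REPO_ID)        # resolves the "default" config from the card
print(ds)                         # per the card, train and test both read data/dataset.json
print(ds["train"][0].keys())      # declared fields: instruction, input, output, system
```
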
data/code.py DELETED
@@ -1,39 +0,0 @@
- import json
-
-
- # Specify the path to your JSON file
- file_path = 'github_dataset.json'
-
- def update_custom_key_in_json(file_path):
-     try:
-         # Open the JSON file and load the data
-         with open(file_path, 'r') as file:
-             data = json.load(file)
-
-         # Check if the data is a list of dictionaries
-         if isinstance(data, list):
-             for item in data:
-                 if isinstance(item, dict) and 'output' in item:
-                     # Get the 'output' value and split it into words
-                     output_text = item['output']
-                     item['instruction'] = 'you are the github assistant'
-
-                     words = output_text.split()
-
-                     # Truncate the words to 2048 words if needed
-                     if len(words) > 2048:
-                         item['output'] = ' '.join(words[:2048])
-
-             # Save the updated data back to the JSON file
-             with open(file_path, 'w') as file:
-                 json.dump(data, file, indent=4)
-
-             print("Output truncated to 2048 words successfully!")
-         else:
-             print("The JSON file doesn't contain a list of dictionaries.")
-
-     except Exception as e:
-         print(f"An error occurred: {e}")
-
- # Call the function to update the custom key
- update_custom_key_in_json(file_path)
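
The deleted helper above rewrote `github_dataset.json` in place, setting a fixed `instruction` and capping each `output` at 2048 whitespace-delimited words. A minimal spot-check sketch for a file after such a pass; the path below is an assumption, not part of the repository:

```python
# Hypothetical spot-check: confirm no 'output' exceeds 2048 words after truncation.
import json

path = 'data/dataset.json'  # assumed path to the processed file

with open(path, 'r') as f:
    records = json.load(f)

word_counts = [len(r.get('output', '').split()) for r in records if isinstance(r, dict)]
print(f"records: {len(records)}, longest output: {max(word_counts)} words")
assert max(word_counts) <= 2048, "found an output longer than 2048 words"
```
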
data/{github_dataset.json → dataset.json} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c5606037c9fcc909c6455c71f908ffa602c57a392cc641b2887d8c77afe8b59f
- size 19756965
+ oid sha256:45839dd432030c63bdc3ddac6b26f0c5f649437d9015c646269f25dc70d8e40a
+ size 13478098
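
As the pointer above shows, `data/dataset.json` is tracked with Git LFS, so a plain checkout may contain only this pointer text rather than the JSON itself. A minimal sketch, assuming the `huggingface_hub` client and a placeholder repository id, for fetching the actual payload:

```python
# Hypothetical: download the LFS-backed file itself rather than its pointer.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<user>/<dataset-name>",  # hypothetical placeholder
    filename="data/dataset.json",
    repo_type="dataset",              # this is a dataset repository
)
print(local_path)  # cached path to the resolved ~13 MB JSON file
```
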
data/stackoverflow_dataset.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:59ab2e78623f3e433f231cb91c25b6199a72799c4a33dbda4cc9fa9b1c835826
- size 17636383
 
 
 
 
data/testing_github_dataset.json DELETED
The diff for this file is too large to render. See raw diff
 
data/testing_stackoverflow_dataset.json DELETED
@@ -1,302 +0,0 @@
1
- [
2
- {
3
- "instruction": "Cross detection in image",
4
- "input": "",
5
- "output": "<p>Simple solution: if you know a-priori locations of all boxes, just calculate the average brightness of the box. Boxes with a mark will be much darker than empty boxes.</p>\n",
6
- "system": "you are the stackoverflow assistant"
7
- },
8
- {
9
- "instruction": "kdtree vs histogram encoding",
10
- "input": "",
11
- "output": "<p>Kd-Tree is a data structure that contains a set of items. It allows a fast search for the K nearest matches of a query item. </p>\n\n<p>As far as the object recognition problem is concerned, Kd-Tree is not a necessary component. It only serves the purpose of improving the runtime of matching individual features.</p>\n\n<p>The histogram approach does not require matching individual features at all. Instead, you quantize the features of a query image. Then, you compute a histogram of the quantized features. In order to find a matching database image, you look for the most similar histogram in the database.</p>\n\n<p>Since no matching of individual features is required, the histogram approach runs asymptotically faster.</p>\n\n<p>An important difference between the two approaches is that the histogram representation is ignorant of the spatial coordinates of the keypoints for which the feature vectors were computed. Thus, it has less discriminative power.</p>\n",
12
- "system": "you are the stackoverflow assistant"
13
- },
14
- {
15
- "instruction": "What does gist of a texture really mean?",
16
- "input": "",
17
- "output": "<p>Have you read <a href=\"http://cvcl.mit.edu/papers/oliva04.pdf\" rel=\"noreferrer\">the original paper</a>? There's probably not a much better way of explaining the gist of the gist (so to speak) than reading the papers themselves.</p>\n",
18
- "system": "you are the stackoverflow assistant"
19
- },
20
- {
21
- "instruction": "What does it mean when cv::minMaxLoc returns the point (-1, -1) as min and max locations?",
22
- "input": "",
23
- "output": "<p>Below are the relevant sections of code for the <a href=\"http://opencv.itseez.com/modules/core/doc/operations_on_arrays.html?highlight=minmaxloc#void%20minMaxLoc%28InputArray%20src,%20double%2a%20minVal,%20double%2a%20maxVal,%20Point%2a%20minLoc,%20Point%2a%20maxLoc,%20InputArray%20mask%29\" rel=\"nofollow\">minMaxLoc</a> function:</p>\n\n<pre><code>static void ofs2idx(const Mat&amp; a, size_t ofs, int* idx)\n{\n int i, d = a.dims;\n if( ofs &gt; 0 )\n {\n ofs--;\n for( i = d-1; i &gt;= 0; i-- )\n {\n int sz = a.size[i];\n idx[i] = (int)(ofs % sz);\n ofs /= sz;\n }\n }\n else\n {\n for( i = d-1; i &gt;= 0; i-- )\n idx[i] = -1;\n }\n}\n\n}\n\nvoid cv::minMaxIdx(InputArray _src, double* minVal,\n double* maxVal, int* minIdx, int* maxIdx,\n InputArray _mask)\n{\n Mat src = _src.getMat(), mask = _mask.getMat();\n int depth = src.depth(), cn = src.channels();\n\n CV_Assert( (cn == 1 &amp;&amp; (mask.empty() || mask.type() == CV_8U)) ||\n (cn &gt;= 1 &amp;&amp; mask.empty() &amp;&amp; !minIdx &amp;&amp; !maxIdx) );\n MinMaxIdxFunc func = minmaxTab[depth];\n CV_Assert( func != 0 );\n\n const Mat* arrays[] = {&amp;src, &amp;mask, 0};\n uchar* ptrs[2];\n NAryMatIterator it(arrays, ptrs);\n\n size_t minidx = 0, maxidx = 0;\n int iminval = INT_MAX, imaxval = INT_MIN;\n float fminval = FLT_MAX, fmaxval = -FLT_MAX;\n double dminval = DBL_MAX, dmaxval = -DBL_MAX;\n size_t startidx = 1;\n int *minval = &amp;iminval, *maxval = &amp;imaxval;\n int planeSize = (int)it.size*cn;\n\n if( depth == CV_32F )\n minval = (int*)&amp;fminval, maxval = (int*)&amp;fmaxval;\n else if( depth == CV_64F )\n minval = (int*)&amp;dminval, maxval = (int*)&amp;dmaxval;\n\n for( size_t i = 0; i &lt; it.nplanes; i++, ++it, startidx += planeSize )\n func( ptrs[0], ptrs[1], minval, maxval, &amp;minidx, &amp;maxidx, planeSize, startidx );\n\n if( minidx == 0 )\n dminval = dmaxval = 0;\n else if( depth == CV_32F )\n dminval = fminval, dmaxval = fmaxval;\n else if( depth &lt;= CV_32S )\n dminval = iminval, dmaxval = imaxval;\n\n if( minVal )\n *minVal = dminval;\n if( maxVal )\n *maxVal = dmaxval;\n\n if( minIdx )\n ofs2idx(src, minidx, minIdx);\n if( maxIdx )\n ofs2idx(src, maxidx, maxIdx);\n} \n\nvoid cv::minMaxLoc( InputArray _img, double* minVal, double* maxVal,\n Point* minLoc, Point* maxLoc, InputArray mask )\n{\n Mat img = _img.getMat();\n CV_Assert(img.dims &lt;= 2);\n\n minMaxIdx(_img, minVal, maxVal, (int*)minLoc, (int*)maxLoc, mask);\n if( minLoc )\n std::swap(minLoc-&gt;x, minLoc-&gt;y);\n if( maxLoc )\n std::swap(maxLoc-&gt;x, maxLoc-&gt;y);\n}\n</code></pre>\n\n<p>Judging from the code flow it appears you have the case of <code>minidx</code> equal to zero, which then sets <code>dminval = dmaxval = 0</code>. Also, within the <code>ofs2idx</code> function, when <code>minidx</code> (i.e., <code>ofs</code> parameter in <code>ofs2idx</code>) is equal to zero the logic sets the min and max points to (-1, -1) (i.e., <code>idx[i] = -1;</code>). This might happen if your matrix has no elements. What is the size of the matrix you are trying to use?</p>\n",
24
- "system": "you are the stackoverflow assistant"
25
- },
26
- {
27
- "instruction": "How to make a robot follow a line using its video camera",
28
- "input": "",
29
- "output": "<p>Here's one approach, suitable for refinement.</p>\n\n<p>Through a combination of zooming, a pixellation filter, and thresholding, reduce the camera input to a 3 by 3 grid of white or black squares. With suitable adjustment, the reduction should be able to blow up the line such that it takes up exactly three of the reduced pixels. The robot's logic then consists of moving in one of eight directions to keep the center pixel black.</p>\n\n<p>The image after reduction:</p>\n\n<pre><code>\u2610 \u25a0 \u2610\n\u2610 \u25a0 \u2610 \u2191 move forward one unit\n\u2610 \u25a0 \u2610\n</code></pre>\n\n<p>What a left turn looks like:</p>\n\n<pre><code>\u2610 \u2610 \u2610\n\u25a0 \u25a0 \u2610 \u2190 turn 90 degrees left\n\u2610 \u25a0 \u2610\n</code></pre>\n\n<p>This is a very simple scheme, and converting the video input to a clean 3 by 3 grid isn't a trivial task, but it should be enough to get you moving in the right direction.</p>\n",
30
- "system": "you are the stackoverflow assistant"
31
- },
32
- {
33
- "instruction": "Efficiently Group Overlapping Rectangles",
34
- "input": "",
35
- "output": "<p>I'd look at <a href=\"http://pygame.org/docs/ref/rect.html\" rel=\"nofollow\">pygame.Rect.collide</a> methods for a response to your problem. Since rectangle overlapping detection is so common in games, I guess their implementation is quite good in terms of computational complexity.</p>\n",
36
- "system": "you are the stackoverflow assistant"
37
- },
38
- {
39
- "instruction": "Which bar code type is the most robust when used with mobile machine vision and a tool like OpenCV?",
40
- "input": "",
41
- "output": "<p>I will go ahead and suggest <a href=\"http://en.wikipedia.org/wiki/QR_code\" rel=\"nofollow\">QR Code</a></p>\n\n<p>Keep in mind there must be as little blur as possible and the image must be clear. I suggest you scale the QR Codes up so resolution doesn't become a problem(50cm x 50cm maybe?) and do not include a lot of data.</p>\n",
42
- "system": "you are the stackoverflow assistant"
43
- },
44
- {
45
- "instruction": "Algorithm to align and compare two sets of vectors which may be incomplete and ignoring scaling?",
46
- "input": "",
47
- "output": "<p>There's a very clever algorithm for identifying starfields. You find 4 points in a diamond shape and then using the two stars farthest apart you define a coordinate system locating the other two stars. This is scale and rotation invariant because the locations are relative to the first two stars. This forms a hash. You generate several of these hashes and use those to generate candidates. Once you have the candidates you look for ones where multiple hashes have the correct relationships.</p>\n\n<p>This is described in a paper and a presentation on <a href=\"http://astrometry.net/\" rel=\"nofollow\">http://astrometry.net/</a> .</p>\n",
48
- "system": "you are the stackoverflow assistant"
49
- },
50
- {
51
- "instruction": "Dense sift vs sift?",
52
- "input": "",
53
- "output": "<p>The obvious difference is that with dense SIFT you get a SIFT descriptor at\nevery location, while with normal sift you get a SIFT descriptions at the\nlocations determined by Lowe's algorithm.</p>\n\n<p>There are many applications where you require non-dense SIFT, one great example is Lowe's original work.</p>\n\n<p>There are plenty of applications where good results have been obtained by computing a\ndescriptor everywhere (densely) one such example is <a href=\"http://www.cs.brown.edu/~pff/latent/\">this</a>. A descriptor similar to dense SIFT is called HOG or DHOG, they are technically not the same thing but conceptually both based on histograms of gradients and are very similar.</p>\n",
54
- "system": "you are the stackoverflow assistant"
55
- },
56
- {
57
- "instruction": "2D Point Set Matching",
58
- "input": "",
59
- "output": "<p>If the object is reasonably rigid and aligned, simple <a href=\"http://www.google.hu/url?sa=t&amp;rct=j&amp;q=autocorrelation&amp;source=web&amp;cd=1&amp;sqi=2&amp;ved=0CBwQFjAA&amp;url=http://en.wikipedia.org/wiki/Autocorrelation&amp;ei=DofgTt-vAYKh4gTTn9TWBg&amp;usg=AFQjCNF0XVfggjIk1I8jkXSXcHDSNDwGxw&amp;cad=rja\" rel=\"nofollow\">auto-correlation</a> would do the trick.\nIf not, I would use <a href=\"http://www.google.hu/url?sa=t&amp;rct=j&amp;q=ransac&amp;source=web&amp;cd=1&amp;ved=0CB0QFjAA&amp;url=http://en.wikipedia.org/wiki/RANSAC&amp;ei=9obgTtLqHeyP4gTawYyTBw&amp;usg=AFQjCNH5aqFVOBzJ7AAnXWK83fuHF4NN1Q&amp;cad=rja\" rel=\"nofollow\">RANSAC</a> to estimate the transformation between the subject and the template (it seems that you have the feature points). Please provide some details on the problem.</p>\n\n<p><strong>Edit:</strong>\nRANSAC (Random Sample Consensus) could be used in your case. Think about unnecessary points in your template as noise (false features detected by a feature detector) - they are the outliners. RANSAC could handle outliners, because it choose a small subset of feature points (the minimal amount that could initiate your model) randomly, initiates the model and calculates how well your model match the given data (how many other points in the template correspond to your other points). If you choose wrong subset, this value will be low and you will drop the model. If you choose right subset it will be high and you could improve your match with an LMS algorithm.</p>\n",
60
- "system": "you are the stackoverflow assistant"
61
- },
62
- {
63
- "instruction": "Why would OpenCV&#39;s cascade classifier detection give way too large of an output?",
64
- "input": "",
65
- "output": "<p>Shouldn't <code>br.y = bodies[0].x + bodies[0].height;</code> be </p>\n\n<p><code>br.y = bodies[0].y + bodies[0].height;</code> ? </p>\n\n<p>Just an idea...</p>\n",
66
- "system": "you are the stackoverflow assistant"
67
- },
68
- {
69
- "instruction": "How to create a single line/edge from a set of superimposing lines/edges in MATLAB?",
70
- "input": "",
71
- "output": "<p>Peter Kovesi's <a href=\"http://www.csse.uwa.edu.au/~pk/research/matlabfns/\" rel=\"nofollow\">CV website</a> contains a great set of functions for line detection. Look at <a href=\"http://www.csse.uwa.edu.au/~pk/research/matlabfns/LineSegments/example/index.html\" rel=\"nofollow\">this</a> example of using them.</p>\n",
72
- "system": "you are the stackoverflow assistant"
73
- },
74
- {
75
- "instruction": "How to detect the intersection point between two lines formed from a cloud of points",
76
- "input": "",
77
- "output": "<p>You have very nice corner in the center. Try Harris Corner Detector and enjoy sub-pixel accuracy ;) \n<a href=\"http://en.wikipedia.org/wiki/Corner_detection\" rel=\"nofollow\">http://en.wikipedia.org/wiki/Corner_detection</a></p>\n\n<p>don't have matlab link, but you can read about Harris in OpenCV docs\n<a href=\"http://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html?#cornerHarris\" rel=\"nofollow\">http://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html?#cornerHarris</a>\nhttp://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html?#cornerSubPix</p>\n",
78
- "system": "you are the stackoverflow assistant"
79
- },
80
- {
81
- "instruction": "Comparing two images visually",
82
- "input": "",
83
- "output": "<ol>\n<li>Resize both images to the same, small size, such as 16x12 or 90x60 (depending on your threshold).</li>\n<li>Reduce the color depth to 4 or 8 bits per pixel (not palettized). You might use a posterize function for this.</li>\n</ol>\n\n<p>Then then see if the two smaller images are duplicates. If so, the originals must be pretty close.</p>\n",
84
- "system": "you are the stackoverflow assistant"
85
- },
86
- {
87
- "instruction": "Accurately measuring relative distance between a set of fiducials (Augmented reality application)",
88
- "input": "",
89
- "output": "<p>You can use a <a href=\"http://mathworld.wolfram.com/LeastSquaresFitting.html\" rel=\"nofollow\">least-squares</a> method, to find the transformation that gives the best fit to all your data. If all you want is the distance between the markers, this is just the average of the distances measured.</p>\n\n<p>Assuming that your marker positions are fixed (e.g., to a fixed rigid body), and you want their relative position, then you can simply record their positions and average them. If there is a potential for confusing one marker with another, you can track them from frame to frame, and use the continuity of each marker location between its two periods to confirm its identity.</p>\n\n<p>If you expect your rigid body to be moving (or if the body is not rigid, and so forth), then your problem is significantly harder. Two markers at a time is not sufficient to fix the position of a rigid body (which requires three). However, note that, at each transition, you have the location of the old marker, the new marker, and the continuous marker, at almost the same time. If you already have an expected location on the body for each of your markers, this should provide a good estimate of a rigid pose every 20 frames.</p>\n\n<p>In general, if your body is moving, best performance will require some kind of model for its dynamics, which should be used to track its pose over time. Given a dynamic model, you can use a <a href=\"http://en.wikipedia.org/wiki/Kalman_filter\" rel=\"nofollow\">Kalman filter</a> to do the tracking; Kalman filters are well-adapted to integrating the kind of data you describe.</p>\n\n<p>By including the locations of your markers as part of the Kalman state vector, you may be able to be able to deduce their relative locations from purely sensor data (which appears to be your goal), rather than requiring this information a priori. If you want to be able to handle an arbitrary number of markers efficiently, you may need to come up with some clever mutation of the usual methods; your problem seems designed to avoid solution by conventional decomposition methods such as sequential Kalman filtering.</p>\n\n<hr>\n\n<p>Edit, as per the comments below:</p>\n\n<p>If your markers yield a full 3D pose (instead of just a 3D position), the additional data will make it easier to maintain accurate information about the object you are tracking. However, the recommendations above still apply:</p>\n\n<ul>\n<li>If the labeled body is fixed, use a least-squares fit of all relevant frame data.</li>\n<li>If the labeled body is moving, model its dynamics and use a Kalman filter.</li>\n</ul>\n\n<p>New points that come to mind:</p>\n\n<ul>\n<li>Trying to manage a chain of relative transformations may not be the best way to approach the problem; as you note, it is prone to accumulated error. However, it is not necessarily a bad way, either, as long as you can implement the necessary math in that framework.</li>\n<li>In particular, a least-squares fit should work perfectly well with a chain or ring of relative poses.</li>\n<li>In any case, for either a least-squares fit or for Kalman filter tracking, a good estimate of the uncertainty of your measurements will improve performance.</li>\n</ul>\n",
90
- "system": "you are the stackoverflow assistant"
91
- },
92
- {
93
- "instruction": "Image is warped after stereo rectification",
94
- "input": "",
95
- "output": "<p>I realise this question is a few years old however, I have recently had a similar issue. Building on <a href=\"https://stackoverflow.com/users/661211/morynicz\">morynicz</a> answer about \"bad chessboard\" patterns to calibrate stereo images, I found that even with a slight deformation in your chessboard pattern, for example that it isn't flat, can produce large warping in the stereo image pair on rectification. The algorithms in <a href=\"http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html\" rel=\"nofollow noreferrer\">OpenCV</a>, for instance, assume a flat chessboard pattern is being presented such that any physical deformation in that pattern will be wrongly attributed to distortions in the camera optics (or in the relative orientations of the two camera sensors). The algorithms will then try really hard to remove this false distortion leading to very warped images. </p>\n\n<p>To avoid this problem, were possible, use a tablet (or other electronic screen) to display the chessboard pattern as it is then guaranteed to be flat.</p>\n\n<p>Additionally, you should check that the images you are using to calibrate the stereo pair are in focus and have no motion blur or image tearing.</p>\n\n<p>If using OpenCV to do the rectification do some experimentation with the flags used in the <a href=\"http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereocalibrate\" rel=\"nofollow noreferrer\">stereoCalibrate</a> function as this may lead to a more \"optimised\" rectification for your particular application.</p>\n",
96
- "system": "you are the stackoverflow assistant"
97
- },
98
- {
99
- "instruction": "Matching virtual camera to real camera",
100
- "input": "",
101
- "output": "<p>OpenCV includes tools and a great theoretical discussion on <a href=\"http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html\" rel=\"nofollow\">camera calibration models</a>, and the approaches that can be used to correct for distortions. An inverse model could be used to distort the generated 3d mesh.</p>\n",
102
- "system": "you are the stackoverflow assistant"
103
- },
104
- {
105
- "instruction": "OpenCV and Computer Vision, where do we stand now?",
106
- "input": "",
107
- "output": "<p>I have had similar problems getting started with OpenCV and from my experience this is actually the biggest hurdle to learning it. Here is what worked for me:</p>\n\n<ol>\n<li><p>This book: \"OpenCV 2 Computer Vision Application Programming Cookbook.\" It's the most up-to-date book and has examples on how to solve different Computer Vision problems (You can see the table of contents on Amazon with \"Look Inside!\"). It really helped ease me into OpenCV and get comfortable with how the library works.</p></li>\n<li><p>Like have others have said, the samples are very helpful. For things that the book skips or covers only briefly you can usually find more detailed examples when looking through the samples. You can also find different ways of solving the same problem between the book and the samples. For example, for finding keypoints/features, the book shows an example using FAST features:</p>\n\n<p><code>vector&lt;KeyPoint&gt; keypoints;</code><br />\n<code>FastFeatureDetector fast(40);</code><br />\n<code>fast.detect(image, keypoints);</code></p>\n\n<p>But in the samples you will find a much more flexible way (if you want to have the option of choosing which keypoint detection algorithm to use):</p>\n\n<p><code>vector&lt;KeyPoint&gt; keypoints;</code><br />\n<code>Ptr&lt;FeatureDetector&gt; featureDetector = FeatureDetector::create(\"FAST\");</code><br />\n<code>featureDetector-&gt;detect(image, keypoints);</code></p></li>\n</ol>\n\n<p>From my experience things eventually start to click and for more specific questions you start finding up-to-date information on blogs or right here on StackOverflow.</p>\n",
108
- "system": "you are the stackoverflow assistant"
109
- },
110
- {
111
- "instruction": "accurately cropping image via saliency map",
112
- "input": "",
113
- "output": "<p>In order to find a better estimate of the salient objects' boundaries, I suggest you'd use some foreground extraction algorithm. One popular algorithm for that task is called <a href=\"http://research.microsoft.com/en-us/um/cambridge/projects/visionimagevideoediting/segmentation/grabcut.htm\" rel=\"nofollow\">GrabCut</a>. It requires an initial estimate of the object boundaries. For the task in hand, the initial estimate would be the boundaries of the blobs from the saliency map.</p>\n",
114
- "system": "you are the stackoverflow assistant"
115
- },
116
- {
117
- "instruction": "What libraries can I use to modify a video?",
118
- "input": "",
119
- "output": "<p><a href=\"http://opencv.willowgarage.com/wiki/\" rel=\"nofollow\"><strong>OpenCV</strong></a>: (Open Source Computer Vision) is a <strong>cross-platform</strong> library of programming functions for real time computer vision.</p>\n\n<p>It provides interfaces for both C and C++ programming laguages.</p>\n\n<p>As <strong>for detecting the text region and drawing boxes around it</strong>, you can take a look at <a href=\"http://felix.abecassis.me/2011/10/opencv-bounding-box-skew-angle/\" rel=\"nofollow\"><strong>this article</strong></a>, which explains how to do this stuff using OpenCV. <strong>For better OCR capabilities</strong> I think that <a href=\"http://code.google.com/p/tesseract-ocr/\" rel=\"nofollow\"><strong>tesseract</strong></a> is the best open source tool available right now.</p>\n\n<p>I've worked on a similar project some time ago and used OpenCV to detect the text region and then tesseract to do proper text recognition.</p>\n",
120
- "system": "you are the stackoverflow assistant"
121
- },
122
- {
123
- "instruction": "cvBlob/Opencv: Why is my output variable empty?",
124
- "input": "",
125
- "output": "<p>Will guess here but not too many times I've seen instances of IplImages that are not actually pointers. Are you sure that <em>image</em>, wherever it's coming from, isn't also a pointer to an IplImage struct?</p>\n\n<pre><code> IplImage *blobimg = image;\n</code></pre>\n",
126
- "system": "you are the stackoverflow assistant"
127
- },
128
- {
129
- "instruction": "Compare SURF descriptors in one image to a list of descriptors in other images",
130
- "input": "",
131
- "output": "<p>You can try <a href=\"http://people.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN\" rel=\"nofollow\">FLANN</a> it on C or C++. It is not too complicated.</p>\n\n<p>However, I am testing FLANN on C++, but matching time (using SURF features, FLANN, query on 1,000 images) is very long, from 20 seconds to 400 seconds (depending on number of feature vectors per image).</p>\n",
132
- "system": "you are the stackoverflow assistant"
133
- },
134
- {
135
- "instruction": "How to provide region of interest (ROI) for edge detection and corner detection in Matlab?",
136
- "input": "",
137
- "output": "<p>Is this what you mean?</p>\n\n<pre><code>BW = poly2mask(c,r,m,n);\nframe = dummy_frame .* BW;\n</code></pre>\n",
138
- "system": "you are the stackoverflow assistant"
139
- },
140
- {
141
- "instruction": "How to pre-digitize a video? Or how to apply anti-aliasing to a video file?",
142
- "input": "",
143
- "output": "<p>Your video is already digitised. So, nothing to be done here.</p>\n\n<p>And the blur you get is on almost all the videos you can ever see. The problem (or the good fact) is that when watching a video playing, the human eye does not see the blur. However, when stopping on a frame, you can see it.</p>\n\n<p>In computer vision, there are more approaches to solve the blur problem:</p>\n\n<ul>\n<li>Ignore it. Hope it is not too much to corrupt your image</li>\n<li>Buy a proffesional camera, made for computer vision. It can reduce most blur, even in extremely demanding situations. It is not cheap.</li>\n<li>Correct it using superpixel algorithms.</li>\n<li>Build your algorithms around a method that is insensitive to blur (phase correlation, by example). </li>\n</ul>\n",
144
- "system": "you are the stackoverflow assistant"
145
- },
146
- {
147
- "instruction": "How to transform a video (using MATLAB) which was taken at an offset to correct viewing angle?",
148
- "input": "",
149
- "output": "<p>For a super easy fix, go back to the scene and take the video again. This time, make sure the circle look like a circle.</p>\n\n<p>That being said, this is an interesting topic in the academia. I believe there's various solutions/articles that are aimed to solve this kind of problem. Base on your reputation, I believe you already know that, but still wanted to give Stackoverflow members a shot at answering this problem. So here it goes.</p>\n\n<p>For an easy fix, you can start with <a href=\"http://www.mathworks.com/matlabcentral/fileexchange/2805-trans-camera-view/content/func_transcameraview/func_transcameraview.m\" rel=\"nofollow\">this function</a>, by guessing the camera location by trial and error until you find an acceptable transformation to your image (a frame of the video). The function does not work right out of the box, you have to debug it a little bit.</p>\n\n<p>If you have access to the (virtual) scene of the image, you can take an image. Base on mutual feature points from the new image and the original image, register the two images (and get the transformation) (<a href=\"http://www.mathworks.com/matlabcentral/fileexchange/22337-robust-nonrigid-point-set-registration\" rel=\"nofollow\">ex1</a>, <a href=\"http://www.mathworks.com/matlabcentral/fileexchange/28305-estimaterigidtransform\" rel=\"nofollow\">ex2</a>).</p>\n\n<p>Finally, apply the same transformation to each frame of the video.</p>\n\n<p>To answer your second question, though the camera location is fixed, there may be objects moving in the scene. So applying the same transformation to every frame will only correct the objects that are still. So it's not ideal. In the end, it depends on what the aims of the project is and how this non/correction affects the project aims.</p>\n",
150
- "system": "you are the stackoverflow assistant"
151
- },
152
- {
153
- "instruction": "How to provide region of interest (ROI) for edge detection and corner detection in Matlab?",
154
- "input": "",
155
- "output": "<p>If you already extracted the frames, you can use <strong>imroi</strong> and <strong>roifilt2</strong>. </p>\n\n<p>Check out the following section from help:<br>\n<a href=\"http://www.mathworks.com/help/toolbox/images/f19-12807.html\" rel=\"nofollow\">http://www.mathworks.com/help/toolbox/images/f19-12807.html</a></p>\n",
156
- "system": "you are the stackoverflow assistant"
157
- },
158
- {
159
- "instruction": "OpenCV python&#39;s API: FlannBasedMatcher",
160
- "input": "",
161
- "output": "<p>Looking in the examples provided by OpenCV 2.3.1 under the python2 folder, I found an implementation of a flann based match function which doesn't rely on the FlanBasedMatcher object.</p>\n\n<p>Here is the code:</p>\n\n<pre><code>FLANN_INDEX_KDTREE = 1 # bug: flann enums are missing\n\nflann_params = dict(algorithm = FLANN_INDEX_KDTREE,\n trees = 4)\n\ndef match_flann(desc1, desc2, r_threshold = 0.6):\n flann = cv2.flann_Index(desc2, flann_params)\n idx2, dist = flann.knnSearch(desc1, 2, params = {}) # bug: need to provide empty dict\n mask = dist[:,0] / dist[:,1] &lt; r_threshold\n idx1 = np.arange(len(desc1))\n pairs = np.int32( zip(idx1, idx2[:,0]) )\n return pairs[mask]\n</code></pre>\n",
162
- "system": "you are the stackoverflow assistant"
163
- },
164
- {
165
- "instruction": "an issue of getting Matrix for training sift features",
166
- "input": "",
167
- "output": "<p>Have you looked at the <a href=\"https://code.ros.org/svn/opencv/trunk/opencv/samples/cpp/descriptor_extractor_matcher.cpp\" rel=\"nofollow\">descriptor_extractor_matcher.cpp</a>, or the <a href=\"https://code.ros.org/svn/opencv/trunk/opencv/samples/cpp/matcher_simple.cpp\" rel=\"nofollow\">matcher_simple.cpp</a> samples from OpenCV? Also, could you post the code you are using to detect the features?</p>\n",
168
- "system": "you are the stackoverflow assistant"
169
- },
170
- {
171
- "instruction": "Robust Hand Detection via Computer Vision",
172
- "input": "",
173
- "output": "<p>Have you taken a look at the camshift paper by Gary Bradski? You can download it from <a href=\"http://www.google.com.sg/url?sa=t&amp;rct=j&amp;q=real%20time%20face%20and%20object%20tracking%20as%20a%20component%20of%20a%20perceptual%20user%20interface&amp;source=web&amp;cd=2&amp;ved=0CCgQFjAB&amp;url=http://www.cvmt.dk/education/teaching/CVG9ExercisePapers/RealTimeFaceandObjectTracking.pdf&amp;ei=koHyTp6QB8bZrQeu6eT7Dw&amp;usg=AFQjCNHKZAbqgDojO3UCIwSuw087ZGzXgw\" rel=\"noreferrer\">here</a></p>\n\n<p>I used the the skin detection algorithm a year ago for detecting skin regions for hand tracking and it is robust. It depends on how you use it.</p>\n\n<p>The first problem with using color for tracking is that it is not robust to lighting variations or like you mentioned, when people have different skin tones. However this can be solved easily as mentioned in the paper by:</p>\n\n<ol>\n<li>Convert image to HSV color space.</li>\n<li>Throw away the V channel and consider the H and S channel and hence\ndiscount for lighting variations.</li>\n<li>Threshold pixels with low saturation due to their instability.</li>\n<li>Bin the selected skin region into a 2D histogram. (OpenCV\"s <a href=\"http://opencv.itseez.com/modules/imgproc/doc/histograms.html?highlight=calchist#cv2.calcHist\" rel=\"noreferrer\">calcHist</a>\nfunction) This histogram now acts as a model for skin.</li>\n<li>Compute the \"backprojection\" (i.e. use the histogram to compute the \"probability\"\nthat each pixel in your image has the color of skin tone) using <a href=\"http://opencv.itseez.com/modules/imgproc/doc/histograms.html?highlight=calchist#calcbackproject\" rel=\"noreferrer\">calcBackProject</a>. Skin\nregions will have high values.</li>\n<li>You can then either use <a href=\"http://opencv.itseez.com/modules/video/doc/motion_analysis_and_object_tracking.html?highlight=meanshift#cv2.meanShift\" rel=\"noreferrer\">meanShift</a> to look for the mode of the 2D\n\"probability\" map generated by backproject or to detect blobs of\nhigh \"probability\".</li>\n</ol>\n\n<p>Throwing away the V channel in HSV and only considering H and S channels is really enough (surprisingly) to detect different skin tones and under different lighting variations. A plus side is that its computation is fast.</p>\n\n<p>These steps and the corresponding code can be found in the original <a href=\"https://rads.stackoverflow.com/amzn/click/com/0596516134\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">OpenCV book</a>.</p>\n\n<p>As a side note, I've also used Gaussian Mixture Models (GMM) before. If you are only considering color then I would say using histograms or GMM makes not much difference. In fact the histogram would perform better (if your GMM is not constructed to account for lighting variations etc.). GMM is good if your sample vectors are more sophisticated (i.e. you consider other features) but speed-wise histogram is much faster because computing the probability map using histogram is essentially a table lookup whereas GMM requires performing a matrix computation (for vector with dimension > 1 in the formula for multi-dimension gaussian distribution) which can be time consuming for real time applications.</p>\n\n<p>So in conclusion, if you are only trying to detect skin regions using color, then go with the histogram method. You can adapt it to consider local gradient as well (i.e. histogram of gradients but possibly not going to the full extent of Dalal and Trigg's human detection algo.) 
so that it can differentiate between skin and regions with similar color (e.g. cardboard or wooden furniture) using the local texture information. But that would require more effort.</p>\n\n<p>For sample source code on how to use histogram for skin detection, you can take a look at OpenCV\"s page <a href=\"http://opencv.itseez.com/doc/tutorials/imgproc/histograms/back_projection/back_projection.html?highlight=mixchannel\" rel=\"noreferrer\">here</a>. But do note that it is mentioned on that webpage that they only use the hue channel and that using both hue and saturation would give better result.</p>\n\n<p>For a more sophisticated approach, you can take a look at the work on \"Detecting naked people\" by Margaret Fleck and David Forsyth. This was one of the earlier work on detecting skin regions that considers both color and texture. The details can be found <a href=\"http://www.cs.uiuc.edu/homes/mfleck/naked.html\" rel=\"noreferrer\">here</a>.</p>\n\n<p>A great resource for source code related to computer vision and image processing, which happens to include code for visual tracking can be found <a href=\"http://www.csee.wvu.edu/~xinl/source.html\" rel=\"noreferrer\">here</a>. And not, its not OpenCV.</p>\n\n<p>Hope this helps.</p>\n",
174
- "system": "you are the stackoverflow assistant"
175
- },
176
- {
177
- "instruction": "fingertip detection and tracking",
178
- "input": "",
179
- "output": "<p><a href=\"http://www.cse.unr.edu/~amayeh/index_files/papers/cviu09.pdf\" rel=\"nofollow\">Here</a> is a pretty in-depth paper on finger segmentation using Zernike moments. <a href=\"http://peanuts.csie.ntu.edu.tw/~scan33/plugin-00055109.pdf\" rel=\"nofollow\">Here</a> is a good paper on using Zernike moments for image recognition as a basis for the first paper.</p>\n",
180
- "system": "you are the stackoverflow assistant"
181
- },
182
- {
183
- "instruction": "How do I detect an instance of an object in an image?",
184
- "input": "",
185
- "output": "<p>Since you <a href=\"https://stackoverflow.com/revisions/8591515/1\">originally posted this</a> as a 'gimme-da-codez' question, showing absolutely no effort, I'm <strong>not</strong> going to give you the code. I will describe the approach in general terms, with hints along the way and it's up to you to figure out the exact code to do it.</p>\n\n<p>Firstly, if you have a template, a larger image, and you want to find instances of that template in the image, <strong>always think of <a href=\"http://en.wikipedia.org/wiki/Cross-correlation\" rel=\"nofollow noreferrer\">cross-correlation</a></strong>. The theory is the same whether you're processing 1D signals (called a matched filter in signal processing) or 2D images. </p>\n\n<ul>\n<li>Cross-correlating an image with a known template gives you a peak wherever the template is an exact match. Look up the function <a href=\"http://www.mathworks.com/help/toolbox/images/ref/normxcorr2.html\" rel=\"nofollow noreferrer\"><code>normxcorr2</code></a> and understand the example in the documentation.</li>\n<li>Once you find the peak, you'll have to account for the offset from the actual location in the original image. The offset is related to the fact that cross-correlating an <code>N</code> point signal with an <code>M</code> point signal results in an <code>N + M -1</code> point output. This should be clear once you read up on cross-correlation, but you should also look at the example in the doc I mentioned above to get an idea.</li>\n</ul>\n\n<p>Once you do these two, then the rest is trivial and just involves cosmetic dressing up of your result. Here's my result after detecting the object following the above.</p>\n\n<p><img src=\"https://i.sstatic.net/TKxQY.png\" alt=\"enter image description here\"></p>\n\n<p>Here's a few code hints to get you going. Fill in the rest wherever I have <code>...</code></p>\n\n<pre><code>%#read &amp; convert the image\nimgCol = imread('https://i.sstatic.net/tbnV9.jpg');\nimgGray = rgb2gray(img);\nobj = rgb2gray(imread('https://i.sstatic.net/GkYii.jpg'));\n\n%# cross-correlate and find the offset\ncorr = normxcorr2(...); \n[~,indx] = max(abs(corr(:))); %# Modify for multiple instances (generalize)\n[yPeak, xPeak] = ind2sub(...);\ncorrOffset = [yPeak - ..., xPeak - ...]; \n\n%# create a mask\nmask = zeros(size(...));\nmask(...) = 1;\nmask = imdilate(mask,ones(size(...)));\n\n%# plot the above result\nh1 = imshow(imgGray);\nset(h1,'AlphaData',0.4)\nhold on\nh2 = imshow(imgCol);\nset(h2,'AlphaData',mask)\n</code></pre>\n",
186
- "system": "you are the stackoverflow assistant"
187
- },
188
- {
189
- "instruction": "OpenCV 2.0: where is adaptors.py?",
190
- "input": "",
191
- "output": "<p>The answer you have linked is quite outdated. The reason adaptors is gone now is because it is no longer needed. Python bindings for OpenCV images now support the array interface (<code>__array_struct__</code> and <code>__array_interface__</code>), so you should be able to just use <code>Image.fromarray</code> method from PIL to go one way, and <code>cv.fromarray</code> or <code>cv.SetData</code> to go the other way. </p>\n",
192
- "system": "you are the stackoverflow assistant"
193
- },
194
- {
195
- "instruction": "Crowd video real time object recognition and collision detection library or tool recommendations",
196
- "input": "",
197
- "output": "<p>Short answer: Not quite</p>\n\n<p>Long answer: There are some libraries performing what you want to do, but all of them, as I know, are commercial stuff, they are expensive, and they are used in security applications. It's quite unlikely that you will get an SDK at a reasonable price, and be flexible enough to let you play with it. (Most likely it will output just number of people, or things like that)</p>\n\n<p>If you still want to do the job, do not get discouraged: OpenCV is a great tool to help you, but it will definitely eat a significant amount of your time. Google <code>crowd counting/tracking/detection in video</code>, find a good paper, and implement it. But be aware that it may need deep mathematical and image processing knowledge to do it. </p>\n\n<p>Processing speed advice: It will not work in real time, unless you write a parallelizable algorithm, and run it on multiple GPUs/CPUs, and tune it carefully. However, with a good implementation, you should have reasonable speed.</p>\n\n<p>And, last advice: if you really want to do it, to work in C++ will be the least difficult part. It's much better to use the official opencv (c++) than the C# wrapper (better docs, up-to-date functions, all the functionalities are there for you). I do not know of any serious project using .NET wrappers - just students playing around.</p>\n",
198
- "system": "you are the stackoverflow assistant"
199
- },
200
- {
201
- "instruction": "Following and replacing a marker in a real time video stream",
202
- "input": "",
203
- "output": "<p>This is called <a href=\"http://en.wikipedia.org/wiki/Video_tracking\" rel=\"nofollow\">tracking</a>. There are many references on the Wikipedia page to get you started.</p>\n",
204
- "system": "you are the stackoverflow assistant"
205
- },
206
- {
207
- "instruction": "calibration of two images using 8 points",
208
- "input": "",
209
- "output": "<p>A good calibration comes from using chessboard calibration:</p>\n\n<p><a href=\"http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html\" rel=\"nofollow\">Camera calibration</a></p>\n",
210
- "system": "you are the stackoverflow assistant"
211
- },
212
- {
213
- "instruction": "Hidden Markov Models with C++",
214
- "input": "",
215
- "output": "<p>You can take a look at <a href=\"http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf\" rel=\"noreferrer\">http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf</a> for the theory behind HMMs. It's not hard to implement the algorithms yourself.</p>\n\n<p>For a C-based version, you can take a look at my implementation, <a href=\"http://code.google.com/p/accelges/\" rel=\"noreferrer\">http://code.google.com/p/accelges/</a>, which I've done for a Google Summer of Code project.</p>\n",
216
- "system": "you are the stackoverflow assistant"
217
- },
218
- {
219
- "instruction": "Supervised Motion Detection Library",
220
- "input": "",
221
- "output": "<p>For such tasks (do you have a concrete task? <em>motion detection</em> is a very vast term...) the requirements largely vary according to:</p>\n\n<ul>\n<li>Camera calibration, coordinate transformations, pose estimation</li>\n<li>Feature extraction (Scale/Rotation/Translation/Colorization Invariance)</li>\n<li>Concept drift (according to the given features)</li>\n<li>...</li>\n</ul>\n\n<p>For a simple application things are often specialized and simplified to make the task more developer-friendly and less error-prone. Therefore I think, there is no wheel (in the sense of a convenient all-purpose solution) that you would reinvent.</p>\n\n<p>But have you noticed <a href=\"http://info.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html\" rel=\"noreferrer\">TLD</a>? Apparantly it's an object tracking library that uses supervised learning to handle concept drift and such. There are some really cool demonstrations available, for example <a href=\"http://www.youtube.com/watch?v=1GhNXHCQGsM\" rel=\"noreferrer\">this</a>.</p>\n",
222
- "system": "you are the stackoverflow assistant"
223
- },
224
- {
225
- "instruction": "Regarding a line through origin in camera coordinate",
226
- "input": "",
227
- "output": "<p>It's not possible until you have more information about what something like <code>(u, v)</code> represents. Think of it this way. Suppose you claimed you could figure it out just based on <code>(u, v)</code> and <code>r</code>. Now, what if I just relabeled your pixels? A pixel doesn't have to represent any specific distance, so if I said (125, 100) was (250, 200) instead, that would make sense too. Suppose I just swap in a higher resolution chip for a lower resolution chip.</p>\n\n<p>To actually recover <code>(p, q)</code>, you'd have to know what physical distance a pixel corresponds to. You'd also have to know whether the pinhole in your camera model is (0,0) in your pixel reference frame, etc.</p>\n",
228
- "system": "you are the stackoverflow assistant"
229
- },
230
- {
231
- "instruction": "How to detect 45 degree edges in an image",
232
- "input": "",
233
- "output": "<p>What is wrong with using an diagonal structure element and simply convolve the image??</p>\n\n<h2>Details</h2>\n\n<p>Please read <a href=\"http://en.wikipedia.org/wiki/Sobel_operator#Formulation\" rel=\"noreferrer\">here</a> and it should become clear how to build the structuring element. If you are familiar with convolution than you can build a simple structure matrix which amplifies diagonals without theory</p>\n\n<pre><code>{ 0, 1, 2}, \n{-1, 0, 1}, \n{-2, -1, 0}\n</code></pre>\n\n<p>The idea is: You want to amplify pixel in the image, where 45deg below it is something different than 45deg above it. Thats the case when you are at a 45deg edge.</p>\n\n<p>Taking an example. Following picture</p>\n\n<p><img src=\"https://i.sstatic.net/8SX80.png\" alt=\"enter image description here\"></p>\n\n<p>convolved by the above matrix gives a graylevel image where the highest pixel values have those lines which are exactly 45deg. </p>\n\n<p><img src=\"https://i.sstatic.net/dKkGG.png\" alt=\"enter image description here\"></p>\n\n<p>Now the approach is to simply binarize the image. Et voila</p>\n\n<p><img src=\"https://i.sstatic.net/XNjT4.png\" alt=\"enter image description here\"></p>\n",
234
- "system": "you are the stackoverflow assistant"
235
- },
236
- {
237
- "instruction": "Slow running orientation transform code",
238
- "input": "",
239
- "output": "<p>Move <code>mOrientationBins[iReadBin].end&lt;float&gt;()</code> outside the for loop.</p>\n",
240
- "system": "you are the stackoverflow assistant"
241
- },
242
- {
243
- "instruction": "Implements covolutional neural network in feature extraction",
244
- "input": "",
245
- "output": "<p>Here is an example I've found with a quick web search:</p>\n\n<ul>\n<li><a href=\"http://www.codeproject.com/KB/library/NeuralNetRecognition.aspx\" rel=\"noreferrer\">http://www.codeproject.com/KB/library/NeuralNetRecognition.aspx</a></li>\n</ul>\n",
246
- "system": "you are the stackoverflow assistant"
247
- },
248
- {
249
- "instruction": "How to detect exact, predefined shapes with hough transform, like a &quot;W&quot;?",
250
- "input": "",
251
- "output": "<p>A quick Google search yields the following information of interest:</p>\n\n<p><a href=\"http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CCIQFjAA&amp;url=http://www.cs.utexas.edu/~dana/HoughT.pdf&amp;ei=jjTsTovPHcGutweDtIilCg&amp;usg=AFQjCNH6-ZY0WsOe5XtMHMGeUZMioi3ghw&amp;sig2=f2h9nkieA9sEekR9lYKfTg\" rel=\"nofollow\">Generalizing the Hough Transform to Detect Arbitrary Shapes</a></p>\n\n<p>and it looks like a <a href=\"http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=2&amp;ved=0CCwQFjAB&amp;url=http://www.eecs.umich.edu/~silvio/teaching/EECS598/lectures/lecture10_3.pdf&amp;ei=jjTsTovPHcGutweDtIilCg&amp;usg=AFQjCNHCsoWNWGDaX5OpINBD9S2FYMzr6w&amp;sig2=LltMNp4JDP6ryAo3GkN25A\" rel=\"nofollow\">lecture</a> using the above paper as its source.</p>\n\n<p>Also, if it's an actual \"W\", would an OCR engine like <a href=\"http://code.google.com/p/tesseract-ocr/\" rel=\"nofollow\">Tesseract</a> be better suited to your needs?</p>\n",
252
- "system": "you are the stackoverflow assistant"
253
- },
254
- {
255
- "instruction": "What i need to know to understand augmented reality",
256
- "input": "",
257
- "output": "<p>Here are some thoughts which may help you:</p>\n\n<h2>MUST (Wikipedia's entries)</h2>\n\n<ul>\n<li><a href=\"http://en.wikipedia.org/wiki/Virtual_reality\" rel=\"nofollow\">Virtual reality</a>.</li>\n<li><a href=\"http://en.wikipedia.org/wiki/Augmented_reality\" rel=\"nofollow\">Augmented reality</a> (french version available <a href=\"http://en.wikipedia.org/wiki/Augmented_reality\" rel=\"nofollow\">here</a>)</li>\n</ul>\n\n<p>They provide links pointing to many relevant references.</p>\n\n<h2>FORUMS (Picked at random)</h2>\n\n<ul>\n<li><a href=\"http://www.augmented-reality-forum.com/index.php\" rel=\"nofollow\">AR Augmented Reality Forum</a></li>\n<li><a href=\"http://www.armeetup.org/forum/\" rel=\"nofollow\">The International Augmented Reality Meetup Groups Forum</a></li>\n<li><a href=\"http://forum.la-realite-augmentee.fr/\" rel=\"nofollow\">Forum La r\u00e9alit\u00e9 augment\u00e9e</a> (in french)</li>\n</ul>\n\n<p>To keep up with the state of the art.</p>\n\n<h2>COURSE SYLLABUS (Picked at random)</h2>\n\n<ul>\n<li><a href=\"https://wiki.engr.illinois.edu/display/cs498mar/Course+syllabus\" rel=\"nofollow\">University of Illinois </a></li>\n</ul>\n\n<p>To have a glimpse of what is actually taught.</p>\n\n<h2>OPENSOURCE PROJECTS (Picked at random)</h2>\n\n<ul>\n<li><a href=\"http://code.google.com/hosting/search?q=label%3aAugmentedReality\" rel=\"nofollow\">Google code</a></li>\n<li><a href=\"http://sourceforge.net/directory/os%3awindows/?q=AUGMENTED%20REALITY\" rel=\"nofollow\">Sourceforge</a></li>\n</ul>\n\n<p>To get involved hands-on in down-to-the-metal coding.</p>\n",
258
- "system": "you are the stackoverflow assistant"
259
- },
260
- {
261
- "instruction": "Tracking multi-touch movements inside the frame with transmitters and receivers",
262
- "input": "",
263
- "output": "<p>As you point out in your diagram, with two fingers different finger positions can give the same sensor readings, so you may have some irreducible uncertainty, unless you find some clever way to use previous history or something.</p>\n\n<p>Do you actually need to know the position of each finger? Is this the right abstraction for this situation? Perhaps you could get a reasonable user interface if you limited yourself to one finger for precise pointing, and recognised e.g. gesture commands by some means that did not use an intermediate representation of finger positions. Can you find gestures that can be easily distinguished from each other given the raw sensor readings?</p>\n\n<p>I suppose the stereotypical computer science approach to this would be to collect the sensor readings from different gestures, throw them at some sort of machine learning box, and hope for the best. You might also try drawing graphs of how the sensor readings change over time for the different gestures and looking at them to see if anything obvious stands out. If you do want to try out machine learning algorithms, <a href=\"http://www.cs.waikato.ac.nz/ml/weka/\" rel=\"nofollow\">http://www.cs.waikato.ac.nz/ml/weka/</a> might be a good start.</p>\n",
264
- "system": "you are the stackoverflow assistant"
265
- },
266
- {
267
- "instruction": "What is the simplest *correct* method to detect rectangles in an image?",
268
- "input": "",
269
- "output": "<p>Check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start. </p>\n\n<p>My answer <a href=\"https://stackoverflow.com/questions/7263621/how-to-find-corners-on-a-image-using-opencv\">here</a> also applies.</p>\n",
270
- "system": "you are the stackoverflow assistant"
271
- },
272
- {
273
- "instruction": "Two images similar - Measurement",
274
- "input": "",
275
- "output": "<p>I'm pretty sure that a global measure on images will not work alone. Particularly, to find out whether the numbers are the same will not work with an image-wide measure.\nMy first guess would be, to make at least a three step procedure.</p>\n\n<h2>1. Find the black marks in the corner</h2>\n\n<p>I don't know how much background you have on image registration, but this should probably work with a very light version. Search for every black mark separately. First you smooth your image slightly (may a Gaussian with r=1) to suppress noise and then you binarize your image with a very low threshold to kill almost every \"non-black\" part. This looks in a good image like this</p>\n\n<p><img src=\"https://i.sstatic.net/c85b1.png\" alt=\"enter image description here\"></p>\n\n<p>You see that those marks with nothing around were made exactly for those applications. Now, the easiest version I can imagine is maybe something like that: Take a reasonable region of each corner. <em>Reasonable</em> means, the black mark should definitely be in there. Now you calculate the <em>mean of all black pixel positions</em>. This is your guess for the center of one black mark. In the case above, this guess is perfect and should give the real center. However, if the image is of bad quality, there are maybe noise/wrong black pixels in this region.</p>\n\n<p>In this case a <em>rigid registration</em> step could be required. Since you only try to find black squares, this registration should be not really hard to implement.</p>\n\n<h2>2. Finding the numbers</h2>\n\n<p>When you found the black corner-marks, you know that these span a rectangle and you can calculate the regions where the important numbers are. If <em>good</em> images are really only off by some pixel, then this step is maybe unnecessary and you can take fixed regions for your numbers.</p>\n\n<p>For instance, see the lower right number: it is always directly between the bottom-marks, on the right side ranging from say 1/4 of the length between the bottom-marks to almost the right mark. You have to measure this region better, but given your mark-positions, you can express this region depending on the black marks.</p>\n\n<h2>3. Comparing the number-regions (character recognition)</h2>\n\n<p>As mentioned in the beginning, I'm not sure whether comparing two regions of numbers by an image-measure for the whole region is good enough. You can try of course and see whether it works. But I would be suspicious of whether the difference between a 1 and a 7 is so large, that you can say for sure, <em>those regions match</em> or <em>they don't match</em>.</p>\n\n<p>Since you can extract the region with the number quite good, why not using some basic OCR and comparing not the image of numbers, but the numbers itself?</p>\n",
276
- "system": "you are the stackoverflow assistant"
277
- },
278
- {
279
- "instruction": "How to configure visual studio 2010 with opencv cvblobs library",
280
- "input": "",
281
- "output": "<p>For the moment there is no pre-built <a href=\"http://cvblob.googlecode.com/files/cvblob-0.10.3-src.zip\" rel=\"nofollow\">cvBlob</a> for Visual Studio, so you are going to have to compile it yourself.</p>\n\n<p>So before you start, make sure you have <a href=\"http://ufpr.dl.sourceforge.net/project/opencvlibrary/opencv-win/2.3.1/OpenCV-2.3.1-win-superpack.exe\" rel=\"nofollow\">OpenCV</a> installed in your computer.</p>\n\n<p><strong><a href=\"http://www.cmake.org/files/v2.8/cmake-2.8.6-win32-x86.exe\" rel=\"nofollow\">Download</a> <a href=\"http://www.cmake.org/cmake/help/runningcmake.html\" rel=\"nofollow\">CMake for Windows</a>. CMake creates the Visual Studio project files that are needed to compile cvBlob.</strong></p>\n\n<p>Once you open <strong>cmake-gui</strong>, fill in the edit boxes <em>\"Where is the source code\"</em> and <em>\"Where to build the binaries\"</em> accordingly (adjust these to your settings):</p>\n\n<ul>\n<li>C:/Documents and Settings/user/Meus documentos/Downloads/cvblob-0.10.3-src/cvblob </li>\n<li>C:/Documents and Settings/user/Meus documentos/Downloads/cvblob-0.10.3-src/cvblob/build</li>\n</ul>\n\n<p>Note: the <strong>build</strong> folder was created manually.</p>\n\n<p>Click on button <strong>Configure</strong> to check for dependencies (CMake will try to find OpenCV on your computer) and then on <strong>Generate</strong> so it can generate the Visual Studio project files.</p>\n\n<p>From here on it's the standard compilation procedures.</p>\n",
282
- "system": "you are the stackoverflow assistant"
283
- },
284
- {
285
- "instruction": "Rgb to grayscale conversion with arm neon",
286
- "input": "",
287
- "output": "<p>You should use <code>\"vld3.8 {d0-d2}, [%1]! \\n\"</code></p>\n\n<p>See also <a href=\"http://hilbert-space.de/?p=22\" rel=\"nofollow\">http://hilbert-space.de/?p=22</a></p>\n",
288
- "system": "you are the stackoverflow assistant"
289
- },
290
- {
291
- "instruction": "How to undistort points in camera shot coordinates and obtain corresponding undistorted image coordinates?",
292
- "input": "",
293
- "output": "<p>The points should be \"unnormalized\" using camera matrix.</p>\n\n<p>More specifically, after call of <code>cvUndistortPoints</code> following transformation should be also added:</p>\n\n<pre><code>double fx = CV_MAT_ELEM(camera1, double, 0, 0);\ndouble fy = CV_MAT_ELEM(camera1, double, 1, 1);\ndouble cx = CV_MAT_ELEM(camera1, double, 0, 2);\ndouble cy = CV_MAT_ELEM(camera1, double, 1, 2);\n\nfloat * dst_ptr = (float*)dst-&gt;data.ptr;\nfor (int pi = 0; pi &lt; npoints; ++pi) {\n float&amp; px = *(dst_ptr + pi * 2);\n float&amp; py = *(dst_ptr + pi * 2 + 1);\n // perform transformation. \n // In fact this is equivalent to multiplication to camera matrix\n px = px * fx + cx;\n py = py * fy + cy;\n}\n</code></pre>\n\n<p>More info on camera matrix at OpenCV <a href=\"http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html\" rel=\"noreferrer\">'Camera Calibration and 3D Reconstruction'</a></p>\n\n<p><strong>UPDATE:</strong></p>\n\n<p>Following C++ function call should work as well: </p>\n\n<pre><code>std::vector&lt;cv::Point2f&gt; inputDistortedPoints = ...\nstd::vector&lt;cv::Point2f&gt; outputUndistortedPoints;\ncv::Mat cameraMatrix = ...\ncv::Mat distCoeffs = ...\n\ncv::undistortPoints(inputDistortedPoints, outputUndistortedPoints, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);\n</code></pre>\n",
294
- "system": "you are the stackoverflow assistant"
295
- },
296
- {
297
- "instruction": "How to set up element in multichannel matrix via CV_MAT_ELEM?",
298
- "input": "",
299
- "output": "<p>I have worked around usage of <code>CV_MAT_ELEM</code>:</p>\n\n<pre><code>float * src_ptr = (float*)src-&gt;data.ptr;\n*(src_ptr + 1 * 2 + 1) = 123;\n</code></pre>\n",
300
- "system": "you are the stackoverflow assistant"
301
- }
302
- ]
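
The deleted testing file above also documents the record format used throughout the dataset: each entry is an object with `instruction`, `input`, `output` (the answer body, stored as HTML), and `system` fields. A minimal sketch, using only the standard library and assuming a local copy of such a file, for reading the records and flattening the HTML in `output` to plain text for inspection:

```python
# Hypothetical helper: load records in the format shown above and strip HTML
# from the answer bodies. The file name and the crude tag removal are
# illustrative assumptions, not part of the repository.
import json
import re
from html import unescape

path = 'testing_stackoverflow_dataset.json'  # assumed local copy

with open(path, 'r') as f:
    records = json.load(f)

def to_plain_text(fragment: str) -> str:
    # Crude tag removal; adequate for inspection, not a full HTML parser.
    return unescape(re.sub(r'<[^>]+>', ' ', fragment)).strip()

first = records[0]
print(first['instruction'])
print(to_plain_text(first['output'])[:200])
```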