User:Iiris.Lüsi/tegemised

From Intelligent Materials and Systems Lab

14.09-21.09: math behind image super-resolution
22.09-29.09: matrix decomposition
30.09-07.10: dimensionality reduction, classic methods
08.10-15.10: dimensionality reduction, preparing more classic methods
16.10-22.10: dimensionality reduction presentation and newer methods
23.10-29.10: dimensionality reduction, state-of-the-art methods
30.10-05.11: discrete cosine transform
05.11-11.11: emotion recognition overview
12.11-19.11: emotion recognition: Viola-Jones for finding eyes and mouth
20.11-26.11: making Viola-Jones work better and trying to find eye corners
27.11-03.12: finding mouth corners and using depth information to find nose, chin, etc.
04.12-10.12: still trying to find mouth corners and trying a new approach on the depth information
11.12-17.12: used Canny edge detection
17.12-07.01: Blender tutorial videos, histogram equalization and blurring for my face program
08.01-14.01: Blender help, tutorials, and face program reorganization
15.01-21.01: Blender game engine tutorials
22.01-28.01: Blender scripting start


Feb 19.-25.
This week I worked really hard on fixing my marker tracking program, since the lighting conditions had changed and it no longer worked well enough. After a lot of effort and a few dead ends I realised I just had to change the saturation. I also figured out a way to keep track of the marker movements, and added some more error checks and binding conditions to make it work a bit better. I also tried to find the best constants and directions. In the end the two programs worked together quite okay. Next I am going to try to use the Kinect camera; depending on how well I can attach it to my program, I can maybe start working on the depth information.
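A minimal OpenCV/Java sketch of the kind of saturation-based marker thresholding described above; the HSV bounds, the blob-size filter and the class and method names are placeholder assumptions, not the actual program:
<syntaxhighlight lang="java">
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

/** Rough sketch of saturation-based marker detection, not the actual program. */
public class MarkerThreshold {

    /**
     * Finds candidate marker centers in a BGR frame by thresholding in HSV.
     * The HSV bounds are placeholders and have to be re-tuned whenever the
     * lighting changes, which is what caused the problems this week.
     */
    public static List<Point> findMarkers(Mat frameBgr, Scalar lowerHsv, Scalar upperHsv) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(frameBgr, hsv, Imgproc.COLOR_BGR2HSV);

        // Keep only pixels inside the chosen HSV range; saturation does most of the work.
        Mat mask = new Mat();
        Core.inRange(hsv, lowerHsv, upperHsv, mask);

        // Blob detection via contours, as in the later diary entries.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Point> centers = new ArrayList<>();
        for (MatOfPoint c : contours) {
            Rect box = Imgproc.boundingRect(c);
            if (box.area() > 20) {  // drop tiny noise blobs; 20 px is an arbitrary cut-off
                centers.add(new Point(box.x + box.width / 2.0, box.y + box.height / 2.0));
            }
        }
        return centers;
    }
}
</syntaxhighlight>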

26.02-03.03:
This week I spent a lot of time trying to find a Java library that could interact with Kinect 2. Sadly, most of the libraries had been discontinued after Kinect 1. I did find the J4K library, which lets the user read the depth and color information. However, with it the input data is a byte array, which is slow to convert to the data type that OpenCV uses. Also, the Imshow module does not display everything properly, so for now my solution would be to write something of my own for that. I have also realised that the fact that I am using Java is seriously slowing down my progress.
I also spent the bigger part of Friday test-solving the problem that Andres made for Armen.
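As a sketch of the byte-array-to-Mat conversion mentioned above, assuming the color stream delivers a width × height × 4 BGRA buffer (the real J4K frame layout may differ):
<syntaxhighlight lang="java">
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

/** Sketch of wrapping a raw Kinect color buffer in an OpenCV Mat. */
public class KinectFrameConverter {

    /**
     * Converts a width*height*4 BGRA byte buffer (the layout assumed here for
     * the Kinect color stream) into a 3-channel BGR Mat for OpenCV.
     */
    public static Mat colorBytesToMat(byte[] bgra, int width, int height) {
        Mat bgraMat = new Mat(height, width, CvType.CV_8UC4);
        bgraMat.put(0, 0, bgra);            // single bulk copy instead of per-pixel loops

        Mat bgr = new Mat();
        Imgproc.cvtColor(bgraMat, bgr, Imgproc.COLOR_BGRA2BGR);
        return bgr;
    }
}
</syntaxhighlight>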






05.03-11.03.2015

This week went by quite fast. For the first part of the week I tried to get my program to work with the Kinect, as I had finally figured out some haphazard way to connect them. However, while testing it I discovered that the Kinect randomly shuts down and starts up again at quite short intervals, which makes it kind of unusable. I also noticed that the color ranges of my markers had changed in comparison to the web camera. Since I am waiting for my new computer, I spent some time using an already-implemented OpenCV method to find mouth corners in the greyscale image. I also spent some time on homework.
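The entry does not say which built-in OpenCV routine was used for the mouth corners; purely as an illustrative assumption, here is how a generic corner detector (goodFeaturesToTrack) could be run on a greyscale mouth region:
<syntaxhighlight lang="java">
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

/** Placeholder sketch: generic corner detection inside a mouth region. */
public class MouthCornerGuess {

    /**
     * Runs goodFeaturesToTrack on a greyscale mouth ROI. The diary does not
     * say which built-in routine was actually used; this is only one plausible
     * candidate, with arbitrary parameter values.
     */
    public static MatOfPoint detectCandidates(Mat grey, Rect mouthRoi) {
        Mat roi = grey.submat(mouthRoi);
        MatOfPoint corners = new MatOfPoint();
        // up to 10 corners, quality level 0.01, minimum distance 5 px between corners
        Imgproc.goodFeaturesToTrack(roi, corners, 10, 0.01, 5);
        return corners;   // coordinates are relative to the ROI, not the full image
    }
}
</syntaxhighlight>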





19.03-25.03.2015

I tried Harris corner detection on the mouth, and even though it looked bad to me, Shahab said that it was good enough and that I should also try using line detection. I used the Canny edge detector on the mouth. However, I have yet to manage to use the line detector on that output, as it took me a while to understand the algorithm in order to provide proper input. I also spent some time working on our image processing course project, and we wrote the proposal. For next week I plan on trying out the optical flow feature in OpenCV to achieve better marker tracking. I will also try to combine different methods to find the mouth corners.
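A rough sketch of the chain described above, Harris corners plus Canny edges feeding a probabilistic Hough line detector; all thresholds and parameter values are placeholder guesses:
<syntaxhighlight lang="java">
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

/** Sketch of the Harris-corner / Canny / line-detection chain on a mouth ROI. */
public class MouthEdges {

    /** Harris corner response map for a greyscale mouth region (parameters are guesses). */
    public static Mat harrisResponse(Mat greyMouth) {
        Mat response = new Mat();
        Imgproc.cornerHarris(greyMouth, response, 2, 3, 0.04);
        return response;
    }

    /**
     * Canny edges followed by the probabilistic Hough transform. HoughLinesP
     * expects a binary edge image as input, which is the "proper input" issue
     * mentioned above; thresholds here are placeholder values.
     */
    public static Mat detectLines(Mat greyMouth) {
        Mat edges = new Mat();
        Imgproc.Canny(greyMouth, edges, 50, 150);

        Mat lines = new Mat();   // each detected line is stored as (x1, y1, x2, y2)
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 20, 10, 3);
        return lines;
    }
}
</syntaxhighlight>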




12.03-18.03.2015

This week started with the open doors day, so we spent some time showing students around TUTI. I managed to use the OpenCV optical flow feature to achieve nice and efficient tracking of the markers on the lips. At first I tried it on the greyscale image, but it was a bit jumpy, so I proceeded to use it on the thresholded image for a slightly better result. I also tried out a lot of different parameters for the Canny edge detector to get the best result. Then I used the OpenCV findContours function, which I had also used for blob detection with the markers. This method was a bit blinky, but it could find the lip contour quite nicely. However, I was planning on trying it out a bit more and asking Shahab about it too. I also tried to dig into the Kinect SDK, as it seems to have a marvellous method for finding mouth corners. My goals for next week are to quickly fix my file writing and Python script to adapt to the improved marker tracking program. Also, hopefully I will be able to find a working method for the lip corners.
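A minimal sketch of the OpenCV optical flow call used for the marker tracking, applied here to the thresholded frames as described above; the surrounding bookkeeping (when to drop lost points, re-detection, etc.) is left out:
<syntaxhighlight lang="java">
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.video.Video;

/** Sketch of frame-to-frame marker tracking with pyramidal Lucas-Kanade. */
public class MarkerTracker {

    /**
     * Tracks the previous marker positions into the current frame. Both inputs
     * are single-channel images (here the thresholded mask, as in the entry
     * above); deciding what to do with points whose status flag is 0 is left
     * to the caller.
     */
    public static MatOfPoint2f track(Mat prevMask, Mat currMask, MatOfPoint2f prevPoints) {
        MatOfPoint2f nextPoints = new MatOfPoint2f();
        MatOfByte status = new MatOfByte();   // 1 if the point was found, 0 otherwise
        MatOfFloat error = new MatOfFloat();
        Video.calcOpticalFlowPyrLK(prevMask, currMask, prevPoints,
                                   nextPoints, status, error);
        return nextPoints;
    }
}
</syntaxhighlight>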





12.03-18.03.2015

This week I worked on fixing my contour merging function. Somehow it managed to crash Windows too. After thoroughly reorganizing the thing and changing public variables to private variables, I managed to get it to work. However, I discovered that the structure of the contours is quite different from what I expected, so I settled for a different approach in the merging department. Still, after reorganizing the points in the contour, I should be able to extract the corners and the upper lip middle quite easily, so that is my goal for next week. I am also going to enlarge the rectangle that Viola-Jones gives, because it is sometimes too small and leaves out some of the important bits, like the corners.
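A minimal sketch of what enlarging the Viola-Jones rectangle could look like; the padding fraction and the clipping logic are my own assumptions, not values from the diary:
<syntaxhighlight lang="java">
import org.opencv.core.Mat;
import org.opencv.core.Rect;

/** Sketch of padding the Viola-Jones mouth rectangle so the corners stay inside it. */
public class RoiUtils {

    /**
     * Grows a detection rectangle by the given fraction (e.g. 0.2) on every
     * side and clips it to the image bounds.
     */
    public static Rect enlarge(Rect r, double fraction, Mat image) {
        int padX = (int) Math.round(r.width * fraction);
        int padY = (int) Math.round(r.height * fraction);

        int x = Math.max(0, r.x - padX);
        int y = Math.max(0, r.y - padY);
        int w = Math.min(image.cols() - x, r.width + 2 * padX);
        int h = Math.min(image.rows() - y, r.height + 2 * padY);
        return new Rect(x, y, w, h);
    }
}
</syntaxhighlight>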





09.04-15.04

This week I spent some time preparing for the presentation we had on Friday. However, most of my efforts were spent on trying to extract the useful bits from the contours. I proceeded to pick out only one of the y values from the contour for every other x coordinate. This way I managed to extract some kind of line that can be used to find local maximum and minimum values, to get the upper lip middle point. This actually worked quite well on my lips, but had more noise with other people. In order to obtain that line I also wrote a basic smoothing function. All in all, the point extraction sometimes works and sometimes doesn't, because the contour extraction with OpenCV is not that stable. Next week I plan on reading a lot of papers to widen my horizon in the face recognition field. I also need to learn some things about web design, as I will need to make areas on the poll selectable.
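A rough sketch of the per-column extraction and smoothing described above; keeping the topmost y per x, the smoothing window, and using every x (rather than every other x) are simplifications and assumptions:
<syntaxhighlight lang="java">
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;

import java.util.TreeMap;

/** Sketch of turning a lip contour into a single curve and smoothing it. */
public class LipCurve {

    /** Keeps one y value per x coordinate (the topmost one) from a contour. */
    public static TreeMap<Integer, Double> topCurve(MatOfPoint contour) {
        TreeMap<Integer, Double> curve = new TreeMap<>();
        for (Point p : contour.toArray()) {
            int x = (int) p.x;
            // smaller y means higher in the image, so keep the minimum y per column
            if (!curve.containsKey(x) || p.y < curve.get(x)) {
                curve.put(x, p.y);
            }
        }
        return curve;
    }

    /**
     * Simple moving-average smoothing of the y values (window size is arbitrary).
     * Local minima/maxima of the smoothed curve can then give the upper lip middle.
     */
    public static double[] smooth(double[] y, int window) {
        double[] out = new double[y.length];
        for (int i = 0; i < y.length; i++) {
            int from = Math.max(0, i - window);
            int to = Math.min(y.length - 1, i + window);
            double sum = 0;
            for (int j = from; j <= to; j++) sum += y[j];
            out[i] = sum / (to - from + 1);
        }
        return out;
    }
}
</syntaxhighlight>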



Period: Apr. 15 – Apr. 22, 2015
Last week I mainly focused on two tasks. First, I had to write the HTML for a clickable image that would highlight sections on hover. As I knew nothing about HTML, CSS or JavaScript, it took me quite a while to google around and find the best option. The rest of the week I spent reading six papers in the area of 3D modelling and facial feature detection.


Period: Apr. 22 – Apr. 29, 2015
This week I worked on preparing a presentation for Einar. The rest of the time I wrote my paper. I started with the literature review part of the process and read some papers, and I managed to write most of the proposed method section.


Period: Apr. 30 – May 6, 2015
This week I finished writing my paper. First I finished reading the literature review papers and wrote the introduction. Then I applied the changes and corrections that Shahab gave me. I also added the bibliography and a lot of formulas to the proposed method. After that, I proceeded to write an experimental results section, where I added some images. Then I applied some more corrections, finished off all the sections and wrote a conclusion. I also added all the citations.



Period: May 14 – May 20, 2015
This week I read two more papers on the topic of my research. In the first one they used Viola-Jones to find landmark points. They also described a profile for each landmark point, which means they compared the area around the point to what it should look like. Based on these landmark points they constructed a simple active shape model using a few vectors. Landmark positions are also corrected based on their location relative to each other. They tested the algorithm by training it on one person and using it on the same person. Their tracking did not work very well with a sad expression, as the mouth shape was complex. In the second paper they first built a general deformable 3D face model based on a hundred scans using PCA. Then they used Parallel Tracking and Mapping on a video sequence to obtain face characteristics. Different poses are stereo-initialized from different views and a sparse point cloud is calculated; after that they are further adjusted. During the scanning process they used the Lucas-Kanade optical flow algorithm for tracking. In the end, a 3D model was reconstructed based on the video sequence. I also worked on my program and tried to extract the lower lip outline. However, the beginning point is not as easily pinpointed as with the upper lip, and there is some noise in the edge image, so I did not manage to extract it well enough yet. But with some adjustments it should be better.




Period: May 12 – May 27, 2015
This week I tried to use a cosine function to estimate the points on the lip, as the edge information is insufficient in most frames. This worked well if at least half of the lips was well pronounced. I tried different parameters and chose the ones for which the greatest number of points would lie on the cosine. After that I tried tracking these points. The optical flow tracking was able to retain some of the points for quite a long time in most cases. However, I was not yet able to create any updated cosine functions based on the tracking data. Also, it was brought to my attention that tracking might not be the best way to proceed. Next week I'll try to work on making my extraction method more stable and maybe use the frame-by-frame option.
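A minimal sketch of one way the "choose the parameters so that the greatest number of points lie on the cosine" step could be implemented, via a brute-force inlier count; the cosine parameterization, the search ranges and the tolerance are all assumptions:
<syntaxhighlight lang="java">
import org.opencv.core.Point;
import java.util.List;

/** Sketch of fitting a cosine to candidate lip points by maximizing inliers. */
public class CosineLipFit {

    /** y = a * cos(2*pi*(x - x0)/period) + c, the assumed form of the lip curve. */
    static double cosine(double x, double a, double period, double x0, double c) {
        return a * Math.cos(2 * Math.PI * (x - x0) / period) + c;
    }

    /** Counts points lying within tol pixels of the given cosine. */
    static int inliers(List<Point> pts, double a, double period,
                       double x0, double c, double tol) {
        int count = 0;
        for (Point p : pts) {
            if (Math.abs(p.y - cosine(p.x, a, period, x0, c)) < tol) count++;
        }
        return count;
    }

    /**
     * Brute-force grid search over a few parameter values, keeping the set
     * that explains the most points. The ranges and step sizes here are
     * arbitrary placeholders, not values from the diary.
     */
    public static double[] fit(List<Point> pts, double tol) {
        double[] best = null;
        int bestCount = -1;
        for (double a = 5; a <= 30; a += 5)
            for (double period = 40; period <= 120; period += 10)
                for (double x0 = 0; x0 <= 120; x0 += 10)
                    for (double c = 0; c <= 200; c += 10) {
                        int n = inliers(pts, a, period, x0, c, tol);
                        if (n > bestCount) {
                            bestCount = n;
                            best = new double[]{a, period, x0, c};
                        }
                    }
        return best;   // {amplitude, period, phase shift, vertical offset}
    }
}
</syntaxhighlight>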