Basic Information
C.V.
Heba Hamdy Ali Hussien
10 Abd El Azez Talat Harb St., beside Ahly Club, 6th Zone,
Nasr City, Cairo, Egypt 11231
E-Mail: heba.h.ali@fcis.bsu.ed
Mobile: (20)10 010 29374
Objective:
Seeking a position as Teaching Assistant.
Work Experience
Jul 2013 - Present [Beni-Suef University in Egypt]:
• Position: Lecturer Assistant (full-time).
• Department: Multimedia.
• Teaching:
- Discrete Math, Introduction to Computers, Computer Programming
- Multimedia, Web Programming
- Image Processing, Computer Vision
- Artificial Intelligence (AI), Pattern Recognition
September 2012 – June 2013 [Future University in Egypt]:
• Position: Lecturer Assistant (full-time).
• Teaching:
- Discrete Math, Programming, Software Engineering, E-learning
- Image Processing, Logic Programming, Artificial Intelligence (AI)
- Natural Language Processing (NLP)
September 2011 - August 2012 [Hubspot]:
• Position: Freelance UI Developer.
• Job role:
- Developing websites using CSS, HTML, jQuery, JavaScript
September 2010 - August 2011 [Expert Wave]:
• Position: Software Engineering Trainer.
• Website: www.expertwave.com
• Job role: Teaching courses:
- Unit Testing, ISTQB, CSDA/CSDP
- Material preparation
September 2004 - December 2010 [High Institute of Computer Science and Information Systems, New Cairo]:
• Position: Teaching Assistant
• Job role:
- Distributed Systems, Programming in C/C++/C#, Advanced Visual Basic
- Graphics, Data Structures, Database Concepts and Database Management Systems, Advanced Operating Systems
- Graduation Project Supervision
M.Sc. Thesis Title
A Framework for Dynamic Hand Gesture Tracking
M.Sc. Thesis Abstract
Hand gestures enable deaf people to communicate in their daily lives without speech. A sign language is a language that uses visually transmitted gesture signs, combining hand shapes, orientation, and movement of the hands simultaneously; arms, lip patterns, body movements, and facial expressions are also used to express the speaker's thoughts. Recently, recognizing and documenting Arabic sign language has received great attention, yet there have been only a few attempts to develop recognition systems that allow deaf people to interact with the rest of society.
The proposed system is an automatic Arabic Sign Language (ArSL) recognition system based on Hidden Markov Models (HMMs). A large set of samples was used to recognize 20 isolated words from standard Arabic sign language. The proposed system is signer-independent. Experiments were conducted on real ArSL videos of deaf people wearing different clothes and with different skin colors. The proposed system achieves an overall recognition rate of 91.39%.
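The HMM-based classification scheme described above can be sketched in outline: score an incoming observation sequence against one discrete HMM per sign with the forward algorithm, and label it with the word whose model gives the highest likelihood. The toy two-state models, the three-symbol alphabet, and the word labels below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | model) via the scaled forward algorithm.

    obs: sequence of discrete observation symbols (e.g. quantized
    hand positions); pi: initial state probabilities (N,);
    A: state transition matrix (N, N); B: emission matrix (N, M).
    """
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                 # scaling factor to avoid underflow
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    log_lik += np.log(alpha.sum())
    return log_lik

def classify(obs, models):
    """Return the word whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda w: forward_log_likelihood(obs, *models[w]))

# Two toy 2-state models over a 3-symbol observation alphabet.
models = {
    "hello": (np.array([0.9, 0.1]),
              np.array([[0.8, 0.2], [0.1, 0.9]]),
              np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])),
    "thanks": (np.array([0.5, 0.5]),
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])),
}

print(classify([0, 0, 2, 2], models))   # sequence favouring "hello"
print(classify([1, 1, 1, 1], models))   # sequence favouring "thanks"
```

In the actual system the per-word models would be trained from the video samples (e.g. with Baum-Welch) rather than hand-set as here.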
Ph.D. Thesis Title
Depth-based Human Activity Recognition
Ph.D. Thesis Abstract
Human activity recognition is an important area of computer vision research today. The goal of human activity recognition is to automatically analyze ongoing activities from an unknown video. The ability to recognize complex human activities from videos enables the construction of several important applications. Automated surveillance systems in public places such as airports and subway stations require the detection of abnormal, as opposed to normal, activities. Recognition of human activities also enables real-time monitoring of patients, children, and elderly persons. Compared to visual data, depth maps provide metric, rather than projective, measurements of scene geometry that are invariant to lighting. However, designing depth sequence representations for action recognition that are both effective and efficient is a challenging task.
In this dissertation, a depth-based human activity recognition framework is developed and evaluated both in real time and on benchmark datasets. The framework is evaluated using Support Vector Machine and k-nearest neighbor algorithms; based on the evaluation results, the real-time framework uses a Support Vector Machine for human action recognition with the Kinect sensor. The framework achieves 94.63% on the MSR Action dataset and 95% in real time.
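The classifier evaluation described above can be illustrated with a minimal k-nearest-neighbor loop over synthetic stand-ins for per-video depth features (k-NN is shown because it is easy to hand-roll; the dissertation's chosen real-time classifier is an SVM). All data, dimensions, and parameters below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class(center, n=40, dim=8):
    """Synthetic feature vectors standing in for one action class."""
    return center + 0.3 * rng.standard_normal((n, dim))

def knn_predict(train_x, train_y, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    d = np.linalg.norm(train_x - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Two well-separated "action" classes in feature space, split train/test.
x0, x1 = make_class(np.zeros(8)), make_class(np.ones(8))
train_x = np.vstack([x0[:30], x1[:30]])
train_y = np.array([0] * 30 + [1] * 30)
test_x = np.vstack([x0[30:], x1[30:]])
test_y = np.array([0] * 10 + [1] * 10)

preds = np.array([knn_predict(train_x, train_y, x) for x in test_x])
accuracy = (preds == test_y).mean()
print(f"k-NN accuracy: {accuracy:.2%}")
```

A real evaluation would substitute the depth-sequence descriptors and benchmark splits for the synthetic clusters, and compare this against an SVM trained on the same features.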
Also, a compact discriminative feature descriptor is proposed to extract spatio-temporal features from depth sequences. Evaluation results show that the proposed Statistical HOG on Multi-temporal Depth Motion Maps approach, used to classify human activities, outperforms previous depth-based methods. The approach is evaluated with the L2-Collaborative Representation Classifier, Support Vector Machine, and k-nearest neighbor in terms of recognition accuracy on two public datasets. Using the L2-Collaborative Representation Classifier, the proposed approach recognizes human activities with 97.93% and 95.97% accuracy on MSR Action3D and MSR Gesture3D, respectively.
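A minimal sketch of the descriptor idea above, under simplifying assumptions: collapse a depth sequence into a depth motion map (DMM) by accumulating absolute inter-frame differences, then summarize it with a single HOG-style histogram of gradient orientations. The bin count, the global (rather than cell-wise) histogram, and the toy "moving blob" sequence are illustrative assumptions, not the dissertation's actual Statistical HOG formulation.

```python
import numpy as np

def depth_motion_map(frames):
    """Accumulate |frame_t - frame_{t-1}| over the whole depth sequence."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def hog_descriptor(image, bins=9):
    """Global histogram of unsigned gradient orientations, magnitude-weighted."""
    gy, gx = np.gradient(image)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180          # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy 3-frame depth sequence of a "moving blob".
t = np.zeros((3, 16, 16))
t[0, 2:6, 2:6] = t[1, 5:9, 5:9] = t[2, 8:12, 8:12] = 1.0

dmm = depth_motion_map(t)
desc = hog_descriptor(dmm)
print(dmm.shape, desc.shape)   # (16, 16) (9,)
```

The "multi-temporal" aspect of the thesis would correspond to computing such maps over several temporal scales and concatenating their descriptors before classification.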