Publications and papers from and about the See Far project focusing on supporting the ageing workforce with vision deficiencies in Europe.

 


COLET: A Dataset for Cognitive WorkLoad Estimation Based on Eye-Tracking

Emmanouil Ktistakis, Vasileios Skaramagkas, Dimitris Manousos, Nikolaos S. Tachos, Evanthia Tripoliti, Dimitrios I. Fotiadis and Manolis Tsiknakis. SSRN: https://ssrn.com/abstract=4059768 or http://dx.doi.org/10.2139/ssrn.4059768

May 2022

Abstract

Cognitive workload is an important component in performance psychology, ergonomics, and human factors. Unfortunately, publicly available datasets are scarce, making it difficult to establish new approaches and comparative studies. In this work, COLET, a COgnitive workLoad estimation dataset based on Eye-Tracking, is presented. Forty-seven (47) individuals’ eye movements were monitored as they solved puzzles involving visual search tasks of varying complexity and duration. The authors give an in-depth analysis of the participants’ performance during the experiments; eye and gaze features were derived from low-level recorded eye metrics, and their relationships with the experimental tasks were investigated. Results from the classification of cognitive workload levels based solely on eye data, obtained by employing and testing a set of machine learning algorithms, are also provided. The dataset is available to the academic community.
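
As an illustration of the kind of pipeline the abstract describes, here is a minimal sketch of classifying workload levels from per-task eye features with scikit-learn. The file name colet_features.csv, the feature columns and the label column are illustrative assumptions, not the actual COLET schema.

```python
# Minimal sketch: classify cognitive workload levels from eye/gaze features.
# The CSV layout and feature names below are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-task feature table: one row per participant/task pair.
df = pd.read_csv("colet_features.csv")  # assumed file name, not the real dataset
features = ["fixation_count", "mean_fixation_duration",
            "saccade_amplitude", "blink_rate", "mean_pupil_diameter"]
X, y = df[features], df["workload_level"]  # e.g. low / medium / high

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```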

 

Note:
Funding Information: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 826429 (Project: See Far).

Declaration of Interests: None to declare.

Ethics Approval Statement: The experimental protocol (110/12-02-2021) was submitted and approved by the Ethical Committee of the Foundation for Research and Technology Hellas (FORTH). Subsequently, all participants read and signed an Information Consent Form.

Keywords: Cognitive workload, Workload classification, Eye movements, Machine Learning, Eye-tracking, Affective computing



Real Time Human Activity Recognition Using Acceleration and First-Person Camera data

Christos Androutsos, Nikolaos S. Tachos, Evanthia E. Tripoliti, Ioannis Karatzanis, Dimitris Manousos, Manolis Tsiknakis, Dimitrios I. Fotiadis

Abstract

The aim of this work is to present an automated method, working in real time, for human activity recognition based on acceleration and first-person camera data. A Long Short-Term Memory (LSTM) model has been built for recognizing locomotive activities (i.e. walking, sitting, standing, going upstairs, going downstairs) from acceleration data, while a ResNet model is employed for the recognition of stationary activities (i.e. eating, reading, writing, watching TV, working on a PC). The outcomes of the two models are fused in order to make the final decision regarding the performed activity. For the training, testing and evaluation of the proposed models, a publicly available dataset and an “in-house” dataset are utilized. The overall accuracy of the proposed algorithmic pipeline reaches 87.8%.
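
The abstract does not specify the fusion rule, so the following is only a hedged sketch of one simple late-fusion strategy: each model emits class probabilities over its own label set, and the most confident class across both wins.

```python
# Hedged sketch of one possible late-fusion step for the two-model pipeline.
# The actual fusion rule used in the paper is not specified here.
import numpy as np

LOCOMOTIVE = ["walking", "sitting", "standing", "upstairs", "downstairs"]
STATIONARY = ["eating", "reading", "writing", "watching_tv", "working_on_pc"]

def fuse(lstm_probs: np.ndarray, resnet_probs: np.ndarray) -> str:
    """Pick the highest-confidence activity across both model outputs."""
    labels = LOCOMOTIVE + STATIONARY
    scores = np.concatenate([lstm_probs, resnet_probs])
    return labels[int(np.argmax(scores))]

# Example: the LSTM leans towards "sitting" but the ResNet is very sure the
# user is reading, so the fused decision is "reading".
print(fuse(np.array([0.10, 0.60, 0.10, 0.10, 0.10]),
           np.array([0.05, 0.80, 0.05, 0.05, 0.05])))
```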


Exploring Artificial Intelligence methods for recognizing human activities in real time by exploiting inertial sensors

Dimitrios Boucharas, Christos Androutsos, Nikolaos S. Tachos, Evanthia E. Tripoliti, Dimitrios Manousos, Vasileios Skaramagkas, Emmanouil Ktistakis, Manolis Tsiknakis, Dimitrios I. Fotiadis

Date of Conference: 25-27 Oct. 2021
Date Added to IEEE Xplore: 15 December 2021
Publisher: IEEE

Abstract

The aim of this work is to present two different algorithmic pipelines for human activity recognition (HAR) in real time, exploiting inertial measurement unit (IMU) sensors. Various learning classifiers have been developed and tested across different datasets. The experimental results provide a comparative performance analysis based on accuracy and latency during fine-tuning, training and prediction. The overall accuracy of the proposed pipeline reaches 66% on the publicly available dataset and 90% on the in-house one.
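
As a hedged illustration of the usual first steps in such IMU-based HAR pipelines (the paper's exact segmentation and feature set are not given here), the sketch below windows a raw accelerometer stream and computes simple statistical features; the 50 Hz rate and the window sizes are assumptions.

```python
# Illustrative sketch (not the paper's exact pipeline): sliding-window
# segmentation of raw accelerometer samples plus simple statistical features.
import numpy as np

def sliding_windows(signal: np.ndarray, size: int, step: int):
    """Yield fixed-length windows over a (n_samples, 3) accelerometer array."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def window_features(window: np.ndarray) -> np.ndarray:
    """Per-axis mean and std, plus magnitude statistics, for one window."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [magnitude.mean(), magnitude.std()]])

# Assumed 50 Hz stream; 2 s windows with 50% overlap.
acc = np.random.randn(1000, 3)  # placeholder for real IMU data
X = np.array([window_features(w) for w in sliding_windows(acc, 100, 50)])
print(X.shape)  # (n_windows, 8) feature matrix, ready for a classifier
```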


Cognitive workload level estimation based on eye tracking: A machine learning approach

Vasileios Skaramagkas, Emmanouil Ktistakis, Dimitris Manousos, Nikolaos S. Tachos, Eleni Kazantzaki, Evanthia E. Tripoliti, Dimitrios I. Fotiadis, Manolis Tsiknakis

Date of Conference: 25-27 Oct. 2021
Date Added to IEEE Xplore: 15 December 2021
Publisher: IEEE

Abstract

Cognitive workload is a critical feature for understanding performance in psychology, ergonomics, and human factors. However, it is still difficult to describe and, thus, to measure. Since there is no single sensor that can give a full understanding of workload, extensive research has been conducted in order to identify robust biomarkers. In recent years, machine learning techniques have been used to predict cognitive workload based on various features. Gaze-extracted features, such as pupil size, blink activity and saccadic measures, have been used as predictors. The aim of this study is to use gaze-extracted features as the only predictors of cognitive workload. Two factors were investigated: time pressure and multitasking. The findings of this study show that eye and gaze features are useful indicators of cognitive workload levels, reaching up to 88% accuracy.


A machine learning approach to predict emotional arousal and valence from gaze extracted features

Vasileios Skaramagkas, Emmanouil Ktistakis, Dimitris Manousos, Nikolaos S. Tachos, Eleni Kazantzaki, Evanthia E. Tripoliti, Dimitrios I. Fotiadis, Manolis Tsiknakis

Date of Conference: 25-27 Oct. 2021
Date Added to IEEE Xplore: 15 December 2021
Publisher: IEEE

Abstract

In recent years, many studies have investigated emotional arousal and valence. Most of them have focused on the use of physiological signals such as EEG or EMG, cardiovascular measures or skin conductance. However, eye-related features have proven to be very helpful and easy-to-use metrics, especially pupil size and blink activity. The aim of this study is to predict the levels of emotional arousal and valence induced during emotionally charged situations from eye-related features. For this reason, we performed an experimental study in which the participants watched emotion-eliciting videos and self-assessed their emotions while their eye movements were being recorded. In this work, several classifiers, such as KNN, SVM, Naive Bayes, Trees and Ensemble methods, were trained and tested. Finally, emotional arousal and valence levels were predicted with 85% and 91% efficiency, respectively.
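
A minimal sketch of the model comparison the abstract mentions, scoring KNN, SVM, Naive Bayes, a decision tree and an ensemble by cross-validation. The gaze feature matrix X and the arousal labels y are random placeholders standing in for real data.

```python
# Hedged sketch of the classifier comparison described in the abstract.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(200, 6)        # placeholder for pupil-size/blink features
y = np.random.randint(0, 2, 200)   # placeholder for low vs. high arousal labels

models = {"KNN": KNeighborsClassifier(),
          "SVM": SVC(),
          "NaiveBayes": GaussianNB(),
          "Tree": DecisionTreeClassifier(),
          "Ensemble": RandomForestClassifier()}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2f}")
```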


Deep learning for diabetic retinopathy detection and classification based on fundus images: A review

Nikos Tsiknakis a, Dimitris Theodoropoulos b, Georgios Manikis a, Emmanouil Ktistakis a,c, Ourania Boutsora d, Alexa Berto e, Fabio Scarpa e,f, Alberto Scarpa e, Dimitrios I. Fotiadis g,h, Kostas Marias a,b

a Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), 70013, Heraklion, Greece
b Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71004, Heraklion, Greece
c Laboratory of Optics and Vision, School of Medicine, University of Crete, 71003, Heraklion, Greece
d General Hospital of Ioannina, 45445, Ioannina, Greece
e D-Eye Srl, 35131, Padova, Italy
f Department of Information Engineering, University of Padova, 35131, Padova, Italy
g Department of Biomedical Research, Institute of Molecular Biology and Biotechnology, FORTH, 45115, Ioannina, Greece
h Department of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, 45110, Ioannina, Greece

Received 7 May 2021; Revised 12 June 2021; Accepted 18 June 2021; Available online 25 June 2021.
https://doi.org/10.1016/j.compbiomed.2021.104599

Abstract

Diabetic retinopathy is a retinal disease caused by diabetes mellitus and the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets that are widely used by the research community, the preprocessing techniques employed and how these accelerate and improve the models’ performance, to the development of such deep learning models for the diagnosis and grading of the disease as well as the localization of the disease’s lesions. We also discuss certain models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
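
As a hedged illustration of one pattern the review covers, namely transfer learning with a pretrained CNN backbone fine-tuned for five-level DR grading, the following PyTorch sketch uses assumed hyperparameters and a dummy batch in place of real fundus images.

```python
# Illustrative transfer-learning sketch for DR grading, not any specific
# reviewed model: a pretrained ResNet-50 with a replaced classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 5  # common DR scale: none, mild, moderate, severe, proliferative

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)  # new grading head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of preprocessed images.
images = torch.randn(4, 3, 224, 224)            # stand-in for fundus photos
labels = torch.randint(0, NUM_GRADES, (4,))     # stand-in for DR grades
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```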

Keywords

Artificial intelligence
Classification
Deep learning
Detection
Diabetic retinopathy
Fundus
Retina
Review
Segmentation

CONCERTED RESPONSE TO THE GREEN PAPER ON AGEING

The See Far project, together with three other Horizon 2020 projects (Ageing@Work, sustAGE and CO-ADAPT) and led by SmartWork, provided comments on the EU Green Paper on Ageing, which was open for consultation until 21 April.


From the response, here are some principles we suggest should be followed when promoting technological solutions for “Making the most out of our working lives”:

• The technologies should be designed based on the needs of the users, taking into account their available skills and resources (financial, material and digital).

• As office workers may struggle to re-enter the labour market because they lack the digital skills to use technological solutions, adequate, specific and adapted training should be offered.

• More effort should be put into promoting the positive aspects of longer working lives and reskilling among older people, including health and wellbeing, financial and social aspects.

• Cross-sectoral action and responsibility should be promoted, where industry has a crucial role not only in providing technologies and tools but also in adapting those introduced into the working environment to the needs of older employees.

• The deployment of technologies should take into account socio-economic aspects, i.e. the availability and cost of devices and internet access, as well as the health and digital literacy of the users.


Review of eye tracking metrics involved in emotional and cognitive processes

Abstract

Eye behaviour provides valuable information revealing one’s higher cognitive functions and state of affect. Although eye tracking is gaining ground in the research community, it is not yet a popular approach for the detection of emotional and cognitive states. In this paper, we present a review of eye and pupil tracking related metrics (such as gaze, fixations, saccades, blinks, pupil size variation, etc.) utilized towards the detection of emotional and cognitive processes, focusing on visual attention, emotional arousal and cognitive workload. In addition, we investigate their involvement, as well as the computational recognition methods employed, for reliable emotional and cognitive assessment. The publicly available datasets employed in relevant research efforts were collected, and their specifications and other pertinent details are described. The multimodal approaches which combine eye-tracking features with other modalities (e.g. biosignals), along with artificial intelligence and machine learning techniques, were also surveyed in terms of their recognition/classification accuracy. The limitations, current open research problems and prospective future research directions for the use of eye tracking as the primary sensor modality are discussed. This study aims to comprehensively present the most robust and significant eye/pupil metrics, based on the available literature, towards the development of a robust emotional or cognitive computational model.
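
To make a couple of the reviewed low-level metrics concrete, here is an illustrative sketch computing a saccade count with a simple velocity threshold (I-VT style) and a blink rate from missing pupil samples. The sampling rate, pixel-to-degree conversion and thresholds are assumptions, not values taken from the reviewed studies.

```python
# Illustrative eye-metric computations over a raw gaze stream; all constants
# are assumed defaults, not values from any specific study in the review.
import numpy as np

def saccade_count(x, y, hz=60.0, deg_per_px=0.03, threshold=30.0):
    """Count velocity peaks above `threshold` deg/s in a gaze trace."""
    velocity = np.hypot(np.diff(x), np.diff(y)) * deg_per_px * hz
    above = velocity > threshold
    # Count rising edges so a multi-sample saccade is counted once.
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

def blink_rate(pupil, hz=60.0):
    """Blinks per minute, approximated as runs of missing (NaN) pupil samples."""
    missing = np.isnan(pupil)
    onsets = np.sum(missing[1:] & ~missing[:-1]) + missing[0]
    return 60.0 * onsets / (len(pupil) / hz)

# Ten seconds of synthetic 60 Hz gaze data with occasional pupil dropouts.
x = np.cumsum(np.random.randn(600)); y = np.cumsum(np.random.randn(600))
pupil = np.where(np.random.rand(600) < 0.02, np.nan, 3.5)
print(saccade_count(x, y), blink_rate(pupil))
```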


Concerted paper between partners and projects: THE ROLE OF AI TECHNOLOGIES IN WORKING THROUGH COVID-19 AND ITS AFTERMATH

The See Far project, together with six other projects, recently worked on a non-scientific paper on the role of AI technologies in working through COVID-19 and its aftermath. This close collaboration was led by the

SmartWork project http://www.smartworkproject.eu/

and joined by:

Ageing@Work: https://ageingatwork-project.eu/

BIONIC https://bionic-h2020.eu/

CO-ADAPT https://coadapt-project.eu/

See Far https://www.see-far.eu/

sustAGE https://www.sustage.eu/

WorkingAge https://www.workingage.eu/

The See Far project played an active role in elaborating and writing about digital solutions and systems addressing the COVID-19 implications in different work environments.


THE ROLE OF AI TECHNOLOGIES IN WORKING THROUGH COVID-19 AND ITS AFTERMATH

The main aims of the concerted paper are to:

  • reflect on and share the COVID-19 implications for work environments, now that teleworking has become a main instrument and necessity for us all;
  • understand how digital solutions and systems could be developed, adapted, optimized or applied to better respond to the challenges of the pandemic context.

The contributions collected followed three proposed guiding questions, although the contributing partners were free to explore other aspects they considered relevant:

  • How can technology applied to the work environment be leveraged to respond to the emerging challenges raised by COVID-19?
  • Are there changes to current priorities and needs, considering the pandemic situation?
  • Is this an opportunity for projects such as SmartWork to underline the needs to introduce a digital revolution in the workplace?

Over the following pages, the different contributions are presented. The individual contributions and opinions of SmartWork partners constitute Chapter 1. Chapter 2 collects the contributions provided by the different projects that were funded under the same call.

Some similarities can be highlighted:

  • The desire to leverage the existing knowledge to rapidly respond to the challenges of this new (even if hopefully temporary) era;
  • The understanding of the challenges ahead and the will to overcome them collectively.

 

The first collective step is the creation of this common paper itself, which is surely a positive sign for the future. Some conclusions can also be drawn:

  • COVID-19 has severely impacted the way we live, work and spend our free time;
  • it has effects on physical and mental health and wellbeing.

AI can play a great role in providing solutions, not only during the emergency but also in the long term, and not only for office workers but also for more traditional industries.


Sensor-based behavioral informatics in support of Health Management and Care

BHI/ BSN 2020 Special Session

Evanthia Tripoliti, Senior Member, IEEE, and Dimitrios I. Fotiadis, Fellow, IEEE

Abstract

The aim of this work is to present the behaviour profiling approach followed in the See Far solution in order to provide adaptive personalized interaction of the See Far smart glasses with the users (i.e. the ageing workforce with vision impairments) in the working environment. The behaviour profiling approach includes the creation of the personalized profile and the association of the profile with the triggering of personalized See Far solution services. The personalized profile of the See Far solution is a multidimensional profile consisting of medical, demographic, physical activity, emotional and behavioural components. The components of the personalized profile can be grouped into two basic parts, a static and a dynamic one. The information that composes each of these components is extracted from the raw data provided by the sensory ecosystem of the See Far solution (sensors in the See Far smartphone and the See Far smart glasses). For the formation of the dynamic part of the profile, deep learning and conventional machine learning techniques are utilized. The personalized profile, through an intelligent component, determines the service of the See Far solution that should be provided to the user. The whole process is performed on the See Far smart glasses in real time.
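
Purely as an illustration of the static/dynamic profile structure described above, the sketch below pairs a small profile record with a rule-based stand-in for the "intelligent component" that triggers a service; all field names, values and rules are hypothetical, not the See Far implementation.

```python
# Hypothetical sketch of a static/dynamic user profile driving service selection.
from dataclasses import dataclass

@dataclass
class Profile:
    # Static part: changes rarely.
    age: int
    vision_impairment: str        # e.g. "AMD", "glaucoma" (assumed values)
    # Dynamic part: updated in real time from the sensory ecosystem.
    activity: str                 # e.g. "reading", "walking"
    cognitive_workload: str       # e.g. "low", "high"

def select_service(p: Profile) -> str:
    """Map the current profile state to an assistive service (illustrative rules)."""
    if p.activity == "reading" and p.vision_impairment == "AMD":
        return "magnify_central_field"
    if p.cognitive_workload == "high":
        return "reduce_overlay_density"
    return "default_view"

print(select_service(Profile(62, "AMD", "reading", "low")))
```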


See Far Architecture – A Digitally Enabled Adaptive Solution Supporting Ageing Workforce With Vision Deficiencies

IEEE EMBC 2020

Nikolaos S. Tachos, Member IEEE, Evanthia E. Tripoliti, Member IEEE, Ioannis Karatzanis, Angus V. D. Kintis, Alberto Scarpa, Giulio Galiazzo, Giacomo Garoffolo, Nicola Spolador, Javier Gutiérrez Rumbao, Alfonso Guerrero de Mier, Ramón González Carvajal, Fellow IEEE, Dimitrios I. Fotiadis

Abstract

The aim of this work is to present the architecture of the See Far solution. See Far is a digitally enabled adaptive solution supporting the ageing workforce with vision deficiencies to remain actively involved in their professional life, helping them to sustain and renew their work and personal life related skills and supporting independent, active and healthy lifestyles. The See Far solution consists of two components: (i) the See Far smart glasses and (ii) the See Far mobile application. The See Far smart glasses are adapted to the needs of the users and optimize their view, using a personalized visual assistant and the integration of Augmented Reality (AR) technologies. The See Far mobile application allows the monitoring of the central vision evolution and the assessment of the progression of the severity of vision impairments, such as age-related macular degeneration (AMD), diabetic retinopathy, glaucoma, cataract and presbyopia, based on Machine Learning (ML) techniques.



Development of an automated analysis for glaucoma screening of videos acquired with smartphone ophthalmoscope

ARVO Annual Meeting Abstract | June 2020
Abstract

Purpose: Widespread screening is crucial for early diagnosis and treatment of glaucoma. The development of smartphone-based ophthalmoscopes represents a valuable resource, since they are low-cost, portable solutions able to image the optic disc easily. The acquired images have a lower field of view and quality than conventional fundus camera images; therefore, standard image processing algorithms are not designed for this new type of image. Thus, we propose a completely automated analysis of videos acquired with the D-EYE smartphone-based ophthalmoscope, capable of assisting ophthalmologists in performing diagnosis. This analysis is going to be integrated and tested in the See Far European Project (#826429).

Methods: 1000 frames were selected from 30 videos (from different healthy subjects) acquired with the D-EYE system. The optic disc was manually segmented. These images (1080×1920 px, resized to 512×1024 px) were used as the training set for a u-shaped convolutional neural network (U-Net), with 4 blocks in both the encoder and decoder paths, to segment the optic disc. After training, the U-Net analyzes each frame of a video. Optic disc area and focus are used to select the best frame. Then, the best frame is cropped around the optic disc, resized to 512×512 px, and the cup segmentation is performed by a second U-Net. It was trained on the 750 images of the RIGA dataset, cropped around the optic disc, resized to 512×512 px, and blurred with Gaussian filters so as to match the quality of images from the D-EYE system. Finally, the algorithm derives the VCDR (vertical cup-to-disc ratio).
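
As a hedged sketch of the final step described above, given binary disc and cup masks (e.g. as produced by the two U-Nets), the VCDR can be computed as the ratio of the masks' vertical extents; the mask construction below is a toy example, not D-EYE output.

```python
# Illustrative VCDR computation from two binary segmentation masks.
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the segmented region (rows containing any pixel)."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def vcdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: cup height divided by disc height."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy example on a 512x512 frame: a 200 px disc and an 80 px cup give 0.40.
disc = np.zeros((512, 512), bool); disc[150:350, 150:350] = True
cup = np.zeros((512, 512), bool); cup[230:310, 230:310] = True
print(f"VCDR = {vcdr(disc, cup):.2f}")
```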

Results: On 5 healthy subjects and 5 subjects with glaucoma, perfect agreement with manual analysis was obtained in the best frame selection, and an accuracy ≥96% was obtained for both disc and cup segmentation. Finally, the computed VCDR denotes a substantial difference between the two groups of subjects: VCDR < 0.40 for healthy subjects, VCDR > 0.45 for subjects with glaucoma.

Conclusions: We developed a proof-of-concept automated analysis of videos acquired with the D-EYE system (Fig. 1). The proposed analysis can provide visual and quantitative information that assists ophthalmologists. Indeed, clinicians can look at a single image rather than an entire video and can easily verify the reliability of the automated analysis. The results encourage the further development of the proposed method and its investigation on a large dataset.

This is a 2020 ARVO Annual Meeting abstract.