Browsing by Author "Doulah, Abul"
Now showing 1 - 6 of 6
Item: Automatic Ingestion Monitor Version 2 - A Novel Wearable Device for Automatic Food Intake Detection and Passive Capture of Food Images (IEEE, 2021)
Doulah, Abul; Ghosh, Tonmoy; Hossain, Delwar; Imtiaz, Masudul H.; Sazonov, Edward; University of Alabama Tuscaloosa

Use of food image capture and/or wearable sensors for dietary assessment has grown in popularity. "Active" methods rely on the user to take an image of each eating episode. "Passive" methods use wearable cameras that continuously capture images. Most passively captured images are not related to food consumption and may present privacy concerns. In this paper, we propose a novel wearable sensor (Automatic Ingestion Monitor, AIM-2) designed to capture images only during automatically detected eating episodes. The capture method was validated on a dataset collected from 30 volunteers in the community who wore the AIM-2 for 24 h in a pseudo-free-living environment and 24 h in a free-living environment. The AIM-2 was able to detect food intake over 10-second epochs with a (mean and standard deviation) F1-score of 81.8 ± 10.1%. The accuracy of eating episode detection was 82.7%. Out of a total of 180,570 images captured, 8,929 (4.9%) belonged to detected eating episodes. Privacy concerns were assessed by a questionnaire on a scale of 1-7. Continuous capture had a concern value of 5.0 ± 1.6 (concerned), while image capture only during food intake had a concern value of 1.9 ± 1.7 (not concerned). Results suggest that the AIM-2 can provide accurate detection of food intake, reduce the number of images for analysis, and alleviate the privacy concerns of users.
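The trigger logic implied by the abstract (capture images only inside detected eating episodes) can be sketched as a small post-processing step over the 10-second epoch predictions. The following Python fragment is a minimal illustration under assumed conventions: binary epoch labels, a hypothetical detect_episodes helper, and an illustrative three-epoch gap tolerance. It is not the AIM-2 implementation.

def detect_episodes(epoch_labels, epoch_len_s=10, min_gap_epochs=3):
    """Group 10-second epoch predictions (1 = eating, 0 = not eating)
    into eating episodes; gaps shorter than min_gap_epochs are bridged
    so that brief chewing pauses do not split one meal into several."""
    episodes, start, gap = [], None, 0
    for i, label in enumerate(epoch_labels):
        if label == 1:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap_epochs:
                episodes.append((start * epoch_len_s, (i - gap + 1) * epoch_len_s))
                start, gap = None, 0
    if start is not None:
        episodes.append((start * epoch_len_s, len(epoch_labels) * epoch_len_s))
    return episodes  # (start_s, end_s) pairs; capture images only inside these

# Example: a brief pause inside a meal is bridged rather than splitting it
labels = [0, 0, 1, 1, 0, 1, 1, 0, 0, 0]
print(detect_episodes(labels))  # [(20, 70)]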
Item: Early Detection of the Initiation of Sit-to-Stand Posture Transitions Using Orthosis-Mounted Sensors (2017-11-23)
Doulah, Abul; Shen, Xiangrong; Sazonov, Edward; University of Alabama Tuscaloosa

Assistance during sit-to-stand (SiSt) transitions for the frail elderly may be provided by powered orthotic devices. The control of a powered orthosis may be performed by means of electromyography (EMG), which requires direct contact of measurement electrodes with the skin. The purpose of this study was to determine whether a non-EMG-based method, using inertial sensors placed at different positions on the orthosis and a lightweight pattern recognition algorithm, can accurately identify SiSt transitions without false positives. A novel method is proposed to eliminate false positives based on a two-stage design: stage one detects the sitting posture; stage two recognizes the initiation of a SiSt transition from a sitting position. The method was validated using data from 10 participants who performed 34 different activities and posture transitions. Features were obtained from the sensor signals and then combined into lagged epochs. A reduced number of features was selected using a minimum-redundancy-maximum-relevance (mRMR) algorithm and forward feature selection. To obtain a recognition model with low computational complexity, we compared the use of an extreme learning machine (ELM) and a multilayer perceptron (MLP) for both stages of the recognition algorithm. Both classifiers were able to accurately identify all posture transitions with no false positives. The average detection time was 0.19 ± 0.33 s for the ELM and 0.13 ± 0.32 s for the MLP. The MLP classifier exhibited less time complexity in the recognition phase, whereas the ELM classifier presented lower computational demands in the training phase. Results demonstrated that the proposed algorithm could potentially be adopted to control a powered orthosis.
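The two-stage design described above can be illustrated compactly. The sketch below uses scikit-learn's MLPClassifier as a stand-in for both stages (the study compared an ELM against an MLP); the class name, interface, and label conventions are assumptions for illustration only.

from sklearn.neural_network import MLPClassifier

class TwoStageSiStDetector:
    def __init__(self):
        # Stage 1: is the wearer currently sitting?
        self.posture_clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
        # Stage 2: given sitting, is a SiSt transition being initiated?
        self.initiation_clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)

    def fit(self, X_posture, y_sitting, X_initiation, y_initiation):
        self.posture_clf.fit(X_posture, y_sitting)
        self.initiation_clf.fit(X_initiation, y_initiation)
        return self

    def predict_epoch(self, x):
        # x: features of one lagged epoch, shape (1, n_features).
        # Stage 2 is consulted only while stage 1 reports sitting, which
        # is what suppresses false positives from the other activities.
        if self.posture_clf.predict(x)[0] != 1:
            return 0
        return int(self.initiation_clf.predict(x)[0])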
Item: Meal Microstructure Characterization from Sensor-Based Food Intake Detection (Frontiers Media, 2017-07-17)
Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward; University of Alabama Tuscaloosa; Boston University; University of Colorado System; University of Colorado Anschutz Medical Campus; University of Colorado Denver

To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake at time resolutions ranging from 23 ms to 8 min. There is no defined standard time resolution for accurately measuring ingestive behavior or meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals, such as the duration of an eating episode, the duration of actual ingestion, and the number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, the AIM, and the push button resampled at different time resolutions (0.1-30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events at push button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10-30 s (p-value < 0.05). The results suggest that the time resolution of sensor-based food intake detection should be <= 5 s to accurately detect meal microstructure. Furthermore, the AIM provides a more accurate measurement of eating episode duration than the diet diary.

Item: Statistical models for meal-level estimation of mass and energy intake using features derived from video observation and a chewing sensor (Nature Portfolio, 2019)
Yang, Xin; Doulah, Abul; Farooq, Muhammad; Parton, Jason; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward; University of Alabama Tuscaloosa; Boston University; University of Colorado Anschutz Medical Campus; University of Colorado Denver

Accurate and objective assessment of energy intake remains an ongoing problem. We used features derived from annotated video observation and a chewing sensor to predict mass and energy intake during a meal without participant self-report. Thirty participants each consumed four different meals in a laboratory setting and wore a chewing sensor while being videotaped. Subject-independent models were derived from bite, chew, and swallow features obtained from either video observation or information extracted from the chewing sensor. With multiple regression analysis, a forward selection procedure was used to choose the best model. The best estimates of meal mass and energy intake had (mean ± standard deviation) absolute percentage errors of 25.2% ± 18.9% and 30.1% ± 33.8%, respectively, and mean ± standard deviation estimation errors of -17.7 ± 226.9 g and -6.1 ± 273.8 kcal using features derived from both video observations and sensor data. Both video annotation and sensor-derived features may be utilized to objectively quantify energy intake.
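The forward selection procedure mentioned for the regression models can be sketched as follows. This is a minimal illustration using scikit-learn's LinearRegression with cross-validated mean absolute error as the selection criterion; the criterion, stopping rule, and example feature names are assumptions, not the study's exact protocol.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, feature_names, max_features=5):
    """Greedily add the feature that most improves cross-validated MAE."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        candidates = []
        for j in remaining:
            cols = selected + [j]
            score = cross_val_score(LinearRegression(), X[:, cols], y,
                                    scoring="neg_mean_absolute_error", cv=5).mean()
            candidates.append((score, j))
        score, j = max(candidates)
        if score <= best_score:  # stop when no candidate improves the model
            break
        best_score = score
        selected.append(j)
        remaining.remove(j)
    return [feature_names[j] for j in selected]

# e.g. forward_select(X, mass_g, ["bites", "chews", "swallows",
#                                 "chew_rate", "meal_duration_s"])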
Item: A Systematic Review of Technology-Driven Methodologies for Estimation of Energy Intake (IEEE, 2019)
Doulah, Abul; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward; University of Alabama Tuscaloosa; Boston University; University of Colorado Denver

Accurate measurement of energy intake (EI) is important for estimation of energy balance and, correspondingly, body weight dynamics. Traditional measurements of EI rely on self-report, which may be inaccurate and underestimate EI. The imperfections in traditional methodologies such as 24-hour dietary recall, dietary records, and food frequency questionnaires motivate the development of technology-driven methods that rely on wearable sensors and imaging devices to achieve an objective and accurate assessment of EI. The aim of this research was to systematically review and examine peer-reviewed papers covering the estimation of EI in humans, with a focus on emerging technology-driven methodologies. Five major electronic databases were searched for articles published from January 2005 to August 2017: PubMed, ScienceDirect, IEEE Xplore, the ACM library, and Google Scholar. Twenty-six eligible studies met the inclusion criteria. The review identified that while current methods of estimating EI show promise, accurate estimation of EI in free-living individuals presents many challenges and opportunities. The most accurate result identified for EI (kcal) estimation had an average accuracy of 94%. However, collectively, the results were obtained from a limited number of food items (i.e., 19), small sample sizes (i.e., 45 meal images), and primarily controlled conditions. Therefore, new methods that accurately estimate EI over long time periods in free-living conditions are needed.

Item: Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment (MDPI, 2019)
Farooq, Muhammad; Doulah, Abul; Parton, Jason; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward; University of Alabama Tuscaloosa; Boston University; University of Colorado Anschutz Medical Campus

Video observation has been widely used to provide ground truth for wearable systems monitoring food intake in controlled laboratory conditions; however, it requires that participants be confined to a defined space. The purpose of this analysis was to test an alternative approach for establishing activity types and food intake bouts in a relatively unconstrained environment. The accuracy of a wearable system for assessing food intake was compared with that of video observation, and the inter-rater reliability of annotation was also evaluated. Forty participants were enrolled. Multiple participants were simultaneously monitored in a four-bedroom apartment using six cameras for three days each. Participants could leave the apartment overnight and for short periods during the day, during which time monitoring did not take place. A wearable system (Automatic Ingestion Monitor, AIM) was used to detect and monitor participants' food intake at a resolution of 30 s using a neural network classifier. Two different food intake detection models were tested, one trained on data from an earlier study and the other on current study data using leave-one-out cross-validation. Three trained human raters annotated the videos for major activities of daily living, including eating, drinking, resting, walking, and talking. They further annotated individual bites and chewing bouts for each food intake bout. For inter-rater reliability, the raters achieved an average (± standard deviation (STD)) kappa value of 0.74 (± 0.02) for activity annotation and an average kappa (Light's kappa) of 0.82 (± 0.04) for food intake annotation. Validity results showed that AIM food intake detection matched human video-annotated food intake with kappas of 0.77 (± 0.10) and 0.78 (± 0.12) for activity annotation and food intake bout annotation, respectively. Results of a one-way ANOVA suggest no statistically significant differences between the average eating durations estimated from raters' annotations and from AIM predictions (p-value = 0.19). These results suggest that the AIM provides accuracy comparable to video observation and may be used to reliably detect food intake in multi-day observational studies.
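The agreement statistics named in this last abstract are straightforward to compute: Cohen's kappa between the sensor's epoch predictions and a rater's annotation, and Light's kappa (the mean pairwise Cohen's kappa) across raters. A minimal sketch follows, assuming 30-s epoch labels coded 1 for eating and 0 otherwise; all data below are hypothetical.

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def lights_kappa(rater_labels):
    """Light's kappa: mean pairwise Cohen's kappa over all rater pairs."""
    pairs = list(combinations(rater_labels, 2))
    return sum(cohen_kappa_score(a, b) for a, b in pairs) / len(pairs)

# Hypothetical 30-s epoch annotations (1 = eating, 0 = other activity)
rater1 = [1, 1, 0, 0, 1, 0, 0, 1]
rater2 = [1, 1, 0, 0, 1, 0, 1, 1]
rater3 = [1, 0, 0, 0, 1, 0, 0, 1]
aim    = [1, 1, 0, 0, 1, 1, 0, 1]  # hypothetical sensor predictions

print(lights_kappa([rater1, rater2, rater3]))  # inter-rater agreement
print(cohen_kappa_score(rater1, aim))          # sensor vs. one rater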