[Socrates] FW: CUNY NLP seminar this Friday: Lei Chen (ETS) on essay scoring, speech, and multimodal

Eva Fernandez Eva.Fernandez at qc.cuny.edu
Wed Apr 9 19:20:25 EDT 2014


Greetings:

The talk advertised below might be of interest to some of you: a presentation by Lei Chen, a research scientist at ETS, on automated scoring of essays and speeches. Friday, 4/11, at 2:15 pm at the CUNY Graduate Center.

Best wishes,
Eva


Eva M. Fernández
Assistant Vice Provost 
Queens College, City University of New York
http://people.qc.cuny.edu/faculty/efernandez



-----Original Message-----
From: Bissoondial, Nishi [mailto:NBissoondial at gc.cuny.edu] 
Sent: Wednesday, April 09, 2014 2:32 PM
Subject: FW: CUNY NLP seminar this Friday: Lei Chen (ETS) on essay scoring, speech, and multimodal

See below.

-----Original Message-----
From: Liang Huang [mailto:liang.huang.sh at gmail.com]

Dear All,

I'm very happy to announce our second talk this semester, featuring Dr. Lei Chen from ETS. Lei is a research scientist at ETS working on speech and NLP, and he will be talking about automated scoring of essays, speeches, and videos (e.g., public speaking).

Date/Time/Place: Friday 4/11, 2:15pm, Graduate Center (365 Fifth Ave), Science Center (Rm. 4102).

Lei will also give a brief overview of ETS's NLP/speech research, as well as ETS internships. If you're interested in ETS, definitely come join this talk.

See you on Friday,
Liang

Automated Scoring: from Essays to Speeches and finally to Videos

Assessments are increasingly being used to track students' skill levels and to support their learning. Accordingly, scoring this rapidly growing volume of assessments in a timely and cost-efficient way has become an important challenge. Automated assessment (AA) offers a solution: it uses natural language processing (NLP), speech processing, and machine learning to simulate human raters' behavior and automatically score students' essay or speech responses.
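(To make the AA idea concrete, here is a minimal toy sketch in Python, assuming scikit-learn is available. This is purely illustrative and not ETS's system: the features and training data below are hypothetical stand-ins. The pattern is simply: extract numeric features from each response, then fit a regression model to human-assigned scores.)

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def essay_features(text):
        # Toy features: length and lexical diversity.
        # Real scoring systems use far richer feature sets.
        tokens = text.lower().split()
        n = len(tokens)
        ttr = len(set(tokens)) / n if n else 0.0
        return [n, ttr]

    # Hypothetical training data: responses paired with human scores.
    essays = ["the cat sat on the mat",
              "a thorough analysis reveals several nuanced arguments",
              "short answer"]
    human_scores = [2.0, 5.0, 1.0]

    X = np.array([essay_features(e) for e in essays])
    model = LinearRegression().fit(X, human_scores)

    # Score a new, unseen response automatically.
    print(model.predict(np.array([essay_features("another student response")])))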

In this talk, I will first provide a brief overview of the research the ETS NLP & Speech group has carried out in the AA area. Then, focusing on speech scoring, I will introduce my previous research on AA's two major problems: (a) finding useful features and (b) building accurate machine learning models. With respect to the former, I will discuss the use of acoustic features widely investigated in phonetics, such as vowel space. With respect to the latter, I will discuss using feature bagging to achieve more robust and accurate scoring models. Finally, I will present my recent work on using multimodal signal processing technology, such as body tracking, to extend the scoring capability to nonverbal communication.
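(For readers unfamiliar with feature bagging: the general idea is to train each model in an ensemble on a random subset of the features and average their predictions, which reduces the influence of any single noisy feature. A minimal sketch of that idea, assuming scikit-learn and synthetic data, and not Lei's actual implementation:)

    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))   # 30 hypothetical speech features per response
    # Synthetic "human scores" driven by a handful of the features.
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)

    # Feature bagging: each of 50 Ridge models sees a random half of the
    # features (bootstrap_features=True, max_features=0.5); bootstrap=False
    # keeps all training samples, so only the feature subsets vary.
    model = BaggingRegressor(Ridge(), n_estimators=50,
                             max_features=0.5, bootstrap_features=True,
                             bootstrap=False, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:3]))   # averaged ensemble scores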

Bio

Lei Chen received his Ph.D. in Electrical and Computer Engineering from Purdue University. He worked as an intern at the Palo Alto Research Center (PARC) during the summer of 2007, and he is currently a research scientist in the R&D division at Educational Testing Service (ETS) in Princeton, NJ. Before joining ETS, his research focused on using non-verbal communication cues, such as gestures and eye gaze, to support language processing. He was involved in the NSF KDI and ARDA VACE projects, investigating the multimodal signals, including speech, gestures, and eye gaze, used in human-to-human conversations. At the 2009 International Conference on Multimodal Interfaces (ICMI), he won the Outstanding Paper Award sponsored by Google. At ETS, his research focuses on the automated assessment of spoken language using speech recognition, natural language processing, and machine learning technologies. Since 2013, he has been working on multimodal signal processing technology for assessing video-based performance tests in areas such as public speaking.

