Differences in reaction time on visual tasks with and without binocular disparity cues?
Have there been any studies that test reaction time on visual processing tasks with and without the benefit of binocular disparity? I have been wondering how much depth information (such as the jutting out of the nose on a face) contributes to visual processing and object encoding in the brain, or whether the visual scene is treated like a diorama with 2D images placed at various depths.
My assumption is that if the 3D aspect of the scene components is important, then reaction times would vary.
neuroscience vision experimental-neuroscience
asked Jan 5 at 16:46 by norlesh
Welcome! Interesting question. To make it more concrete, could you include some example vision processing tasks you already know about and are thinking of testing?
– Steven Jeuris♦ Jan 5 at 17:11
Thanks. I was thinking of some sort of random object recognition task. I'm coming to this from a programming perspective: I have been studying deep learning networks, all of which (as far as I'm aware) use only 2D images, so I'm wondering what is to be gained by adding an extra depth channel to a ConvNet, and whether that is conceivably what is already included in the biological case.
– norlesh Jan 5 at 17:24
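To make the "extra depth channel" idea from the comment concrete, here is a minimal PyTorch sketch (not part of the original thread; the layer sizes, class name, and tensor shapes are illustrative) of a small ConvNet whose first convolution accepts a 4-channel RGB-D input rather than 3-channel RGB:

```python
# Minimal sketch: a ConvNet taking RGB-D input (4 channels) instead of RGB (3).
# All layer sizes are illustrative; only in_channels of the first conv changes.
import torch
import torch.nn as nn

class RGBDNet(nn.Module):
    def __init__(self, num_classes: int = 10, use_depth: bool = True):
        super().__init__()
        in_channels = 4 if use_depth else 3  # RGB + optional depth plane
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3 or 4, H, W); depth is simply stacked as a fourth plane
        return self.classifier(self.features(x).flatten(1))

# Usage: concatenate a per-pixel depth map onto the RGB image along the channel axis.
rgb = torch.rand(8, 3, 64, 64)
depth = torch.rand(8, 1, 64, 64)        # e.g. a normalised disparity/depth map
rgbd = torch.cat([rgb, depth], dim=1)   # shape (8, 4, 64, 64)
logits = RGBDNet(use_depth=True)(rgbd)
```

In practice the depth plane could come from a stereo-disparity map or a depth sensor; whether it helps depends on the task and the data.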
The task would perhaps somehow time responses to 3D items revealed behind flat glass versus items displayed on a flat-screen display.
– norlesh Jan 5 at 17:50
1 Answer
According to the three published studies I found below, reaction time is slower when depth-disparity information is withheld; whether or not that extra information is actually encoded in the brain is still an open question.
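As a concrete illustration of the kind of stereo-versus-mono reaction-time comparison these studies report, here is a minimal Python sketch (not taken from any of the papers; the subject RTs are made-up numbers) of a paired within-subject test:

```python
# Minimal sketch of a within-subject reaction-time comparison (values are made up).
# Each entry is one subject's mean RT in seconds for the stereo and mono conditions.
from scipy import stats

rt_stereo = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]  # with binocular disparity
rt_mono   = [0.57, 0.51, 0.60, 0.55, 0.50, 0.58]  # disparity withheld

# Paired t-test: is RT reliably faster when disparity is available?
t, p = stats.ttest_rel(rt_stereo, rt_mono)
mean_diff = sum(m - s for s, m in zip(rt_stereo, rt_mono)) / len(rt_stereo)
print(f"mean RT cost without stereo: {mean_diff*1000:.0f} ms (t={t:.2f}, p={p:.3f})")
```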
The first paper, by Young Lim Lee and Jeffrey A. Saunders, used separately rendered left- and right-eye images of novel 3D shapes, with and without binocular cues, and timed test subjects discriminating whether two images were rotated views of the same object. Across a set of three experiments they concluded in all cases that stereo information improved reaction times.
Stereo improves 3D shape discrimination even when rich monocular shape cues are available
The second paper, by Darren Burke, Jessica Taubert, and Talia Higman, had test subjects briefly view pairs of photographed human faces through a stereoscope, with and without stereoscopic information, and identify whether the two faces showed the same or a different individual. Subjects had faster reaction times and lower error rates when stereoscopic information was present.
Are face representations viewpoint dependent? A stereo advantage for generalising across different views of faces
In the third paper, by Oliver ZJ, Cristino F, Roberts MV, Pegna AJ, and Leek EC, test subjects were split into mono and stereo groups, memorized a set of 3D objects, and then judged whether each of a series of test objects was from the memorized set. The test used a 3D stereo monitor and glasses, with either complementary or identical images shown to each eye depending on the group, while 128-channel ERP (event-related potential) traces were recorded. Accuracy was higher for the stereo group; no statistics were given for reaction times, but the paper concludes with an analysis and discussion of the differences in ERP data collected from the two groups.
Stereo Viewing Modulates Three-Dimensional Shape Processing During Object Recognition: A High-Density ERP Study
answered Jan 5 at 20:01 by norlesh · edited Jan 5 at 21:43 by Steven Jeuris♦