
DIVA_AuditoryCortexCategorical

PURPOSE ^

DIVA_AuditoryCortexCategorical Categorical perception model

SYNOPSIS ^

function out=DIVA_AuditoryCortexCategorical(varargin)

DESCRIPTION ^

 DIVA_AuditoryCortexCategorical Categorical perception model

 This model initiates the production of phonemic segments,
 sending an index to the desired phonemic target to MotorCortex,
 and the expected auditory and somatosensory consequences
 of the phonemic target to AuditoryCortex and SomatosensoryCortex
 respectively via the SoundMap.  There is a one-to-one relationship
 between AuditoryCortexCategorical 'target's and SoundMap representations.

 This model also recognizes sound segments (from AuditoryCortex)
 by comparing them to stored phonemic representations (if no
 match is found the model creates a new phonemic representation
 for the present sound). Phonemic segments are defined by
 blocks of sound surrounded by silence.
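
The recognition step described above can be illustrated as a simple MSE comparison against the stored target means, matching the (currently commented-out) recognition code in the source listing. The target values and variable names here are hypothetical:

```matlab
% Illustrative sketch only: recognize a time-normalized segment by MSE
% against stored target means (values and names are hypothetical).
targets(1).mean = [500;1500]*ones(1,64); targets(1).label = 'ah';
targets(2).mean = [300;2300]*ones(1,64); targets(2).label = 'ee';
segment = [310;2280]*ones(1,64);     % incoming time-normalized segment
matchlevels = zeros(1,length(targets));
for n1=1:length(targets),
  matchlevels(n1) = mean(mean((segment-targets(n1).mean).^2,1),2);
end
[nill,winner] = min(matchlevels);
disp(['Best match: ',targets(winner).label]);   % displays 'Best match: ee'
```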

 DIVA_AuditoryCortexCategorical('init' [,SessionName]);          Initializes the module
 DIVA_AuditoryCortexCategorical('save' [,SessionName] );         Saves state
 DIVA_AuditoryCortexCategorical('exit');                         Exits the module (without saving)
 DIVA_AuditoryCortexCategorical(PropertyName [,PropertyValue] )  Reads and writes internal model
                                                                 properties

 DIVA_AuditoryCortexCategorical('label',PhonemeName);
 Informs the AuditoryCortexCategorical model that the next sound segment
 should be identified as the phonemic segment 'PhonemeName'.

 DIVA_AuditoryCortexCategorical('sound',s);
 Sends the sound signal s (AuditoryCortex representation) to the
 AuditoryCortexCategorical model. This is sequentially stored as the current sound
 segment until a period of silence (NaN-valued frames in the formant version) is found.
 Then, if a pre-existing phonemic label has been associated
 with this segment its phonemic representation is updated, otherwise
 the model compares the incoming sound segment with the stored
 phonemic representations and displays the most likely match.

 DIVA_AuditoryCortexCategorical('target',PhonemeName);
 Initiates production of phonemic target 'PhonemeName'. The
 model sequentially sends indexes into a temporal representation
 of the phonemic target to the SoundMap area.
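
A typical learn-then-produce sequence might look as follows. This is a hedged sketch: it assumes a running DIVA environment, `s` holds formant frames from AuditoryCortex, and (as inferred from the source below) a NaN frame marks the silence that closes a segment in the formant version:

```matlab
% Hypothetical session; 's' is a formants-by-time matrix from AuditoryCortex.
DIVA_AuditoryCortexCategorical('init');                   % new session
DIVA_AuditoryCortexCategorical('label','ba');             % name the next segment
DIVA_AuditoryCortexCategorical('sound',s);                % stream in sound frames
DIVA_AuditoryCortexCategorical('sound',nan(size(s,1),1)); % silence: segment ends
DIVA_AuditoryCortexCategorical('target','ba');            % produce the learned target
DIVA_AuditoryCortexCategorical('save','default');         % persist state
```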

 Current DIVA_AuditoryCortexCategorical properties are: (* requires re-initialization)
     AuditoryTargets         : structure containing the learned auditory targets
                               (read-only)
     SilenceThresholdEnergy  : Energy threshold delimiting sound segments
                               (energy must be larger than 'SilenceThresholdEnergy')
                               [Not necessary since formants always > 0]
     SilenceThresholdTime    : Minimum segment duration (energy must exceed
                               'SilenceThresholdEnergy' for at least 'SilenceThresholdTime' seconds)
                               Note: Both SilenceThresholdEnergy/Time are important for parsing
                                     continuous waveform input (e.g. microphone) and are used in
                                     the spectral-representation version of the model, not in the (formant) version
     delayToSoundMap         : delay (in seconds) for signals to SoundMap
     ntimepoints             : number of interpolated time-points in the stored representations
                               Auditory Targets are time-normalized by assigning more time-points
                               to areas of higher auditory trajectory velocity than to more
                               static ones.  The number of normalized time-points is specified
                               by ntimepoints.  The target velocity is stored within the target
                               specification for use in reconstructing the original time scale
                               as well as in velocity-scaling the stored representation
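
The time normalization can be pictured as resampling the trajectory at equal steps of arc length, so that fast-moving stretches receive proportionally more of the normalized time-points. The actual computation is performed by equalstep.m, whose exact algorithm may differ; this sketch only illustrates the idea, with a hypothetical dynamic trajectory:

```matlab
% Illustrative sketch; assumes a dynamic (non-constant) formants-by-time
% trajectory 'traj'. equalstep.m is the real implementation.
ntimepoints = 64;
traj = [500 520 560 620 700; 1500 1490 1470 1440 1400];
t    = 1:size(traj,2);
d    = sqrt(sum(diff(traj,1,2).^2,1));   % per-step trajectory speed
s    = [0,cumsum(d)];                    % cumulative arc length
snew = linspace(0,s(end),ntimepoints);   % equal steps along the trajectory
tnew = interp1(s,t,snew);                % fast stretches span more new points
normtraj = interp1(t,traj',tnew)';       % time-normalized representation
vel  = 1./diff(tnew);                    % stored to recover the time scale
```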

CROSS-REFERENCE INFORMATION ^

This function calls: This function is called by:

SOURCE CODE ^

0001 function out=DIVA_AuditoryCortexCategorical(varargin)
0002 % DIVA_AuditoryCortexCategorical Categorical perception model
0003 %
0004 % This model initiates the production of phonemic segments,
0005 % sending an index to the desired phonemic target to MotorCortex,
0006 % and the expected auditory and somatosensory consequences
0007 % of the phonemic target to AuditoryCortex and SomatosensoryCortex
0008 % respectively via the SoundMap.  There is a one-to-one relationship
0009 % between AuditoryCortexCategorical 'target's and SoundMap representations.
0010 %
0011 % This model also recognizes sound segments (from AuditoryCortex)
0012 % by comparing them to stored phonemic representations (if no
0013 % match is found the model creates a new phonemic representation
0014 % for the present sound). Phonemic segments are defined by
0015 % blocks of sound surrounded by silence.
0016 %
0017 % DIVA_AuditoryCortexCategorical('init' [,SessionName]);          Initializes the module
0018 % DIVA_AuditoryCortexCategorical('save' [,SessionName] );         Saves state
0019 % DIVA_AuditoryCortexCategorical('exit');                         Exits the module (without saving)
0020 % DIVA_AuditoryCortexCategorical(PropertyName [,PropertyValue] )  Reads and writes internal model
0021 %                                                                 properties
0022 %
0023 % DIVA_AuditoryCortexCategorical('label',PhonemeName);
0024 % Informs the AuditoryCortexCategorical model that the next sound segment
0025 % should be identified as the phonemic segment 'PhonemeName'.
0026 %
0027 % DIVA_AuditoryCortexCategorical('sound',s);
0028 % Sends the sound signal s (AuditoryCortex representation) to the
0029 % AuditoryCortexCategorical model. This is sequentially stored as the current sound
0030 % segment until a period of silence (NaN-valued frames in the formant version) is found.
0031 % Then, if a pre-existing phonemic label has been associated
0032 % with this segment its phonemic representation is updated, otherwise
0033 % the model compares the incoming sound segment with the stored
0034 % phonemic representations and displays the most likely match.
0035 %
0036 % DIVA_AuditoryCortexCategorical('target',PhonemeName);
0037 % Initiates production of phonemic target 'PhonemeName'. The
0038 % model sequentially sends indexes into a temporal representation
0039 % of the phonemic target to the SoundMap area.
0040 %
0041 % Current DIVA_AuditoryCortexCategorical properties are: (* requires re-initialization)
0042 %     AuditoryTargets         : structure containing the learned auditory targets
0043 %                               (read-only)
0044 %     SilenceThresholdEnergy  : Energy threshold delimiting sound segments
0045 %                               (energy must be larger than 'SilenceThresholdEnergy')
0046 %                               [Not necessary since formants always > 0]
0047 %     SilenceThresholdTime    : Minimum segment duration (energy must exceed
0048 %                               'SilenceThresholdEnergy' for at least 'SilenceThresholdTime' seconds)
0049 %                               Note: Both SilenceThresholdEnergy/Time are important for parsing
0050 %                                     continuous waveform input (e.g. microphone) and are used in
0051 %                                     the spectral-representation version of the model, not in the (formant) version
0052 %     delayToSoundMap         : delay (in seconds) for signals to SoundMap
0053 %     ntimepoints             : number of interpolated time-points in the stored representations
0054 %                               Auditory Targets are time-normalized by assigning more time-points
0055 %                               to areas of higher auditory trajectory velocity than to more
0056 %                               static ones.  The number of normalized time-points is specified
0057 %                               by ntimepoints.  The target velocity is stored within the target
0058 %                               specification for use in reconstructing the original time scale
0059 %                               as well as in velocity-scaling the stored representation
0060 %
0061 
0062 % Note: This model updates the target representation of the
0063 %       AuditoryCortex area to match the AuditoryTargets
0064 %       stored here (non-modular design, to be changed later...)
0065 
0066 % 2006-10-18: (JSB) can produce learned sounds in sequence **no
0067 % coarticulation.  See 'target' functionality
0068 
0069 % Dependencies: equalstep.m
0070 
0071 % TODO % Allow user variation of target regions, global and local
0072 
0073 out=[];
0074 global DIVA_AuditoryCortexCategorical_data
0075 
0076 for indexargin=1:2:nargin,
0077   switch(varargin{indexargin}),
0078    case 'init',
0079     SessionFolder=strcat(DIVA('SessionFolder'),filesep);
0080     if nargin<indexargin+1 || isempty(varargin{indexargin+1}),
0081       initfile='';
0082     else,
0083       initfile=[SessionFolder,'Session_',varargin{indexargin+1},filesep,mfilename,'.mat'];
0084     end
0085     if isempty(initfile) || isempty(dir(initfile)),
0086       disp([mfilename, ' : Defining new session...']);
0087       % INITIALIZES INTERNAL MODULE PARAMETERS %%%%%%%%%
0088       DIVA_AuditoryCortexCategorical_data.params=struct(...
0089           'ntimepoints',64,...    % number of interpolated time-points in the stored representations
0090           'delayToSoundMap',.005,...
0091           'SilenceThresholdEnergy',.05,... % Threshold for sound segments (Energy larger than
0092           'SilenceThresholdTime',.050,...  % 'SilenceThresholdEnergy' for at least 'SilenceThresholdTime' seconds)
0093           'match',-1,...                   % Currently selected auditory target
0094           'AuditoryTargets',...       % Auditory Target Specification
0095           repmat(struct(...
0096               'mean',[],...           % Mean signal (formants)
0097               'var',[],...            % Signal Variance, used for target regions
0098               'velocity',[],...       % Signal velocity, used for normalization
0099               'label',[],...          % Phoneme/syllable/sound identification
0100               'nsamples',[],...          % Number of samples, used to adjust variance
0101               'ntrained',[]),[1,0])); % Number of training trials
0102       % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0103 
0104     else,
0105       data=load(initfile,'-mat');
0106       DIVA_AuditoryCortexCategorical_data.params=data.params;
0107     end
0108     % INITIALIZES OTHER (TEMPORAL) PARAMETERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0109     DIVA_AuditoryCortexCategorical_data.params.TimeStep = DIVA('TimeStep');
0110 % nbinsenergy used in the spectral implementation, removed by JSB for the formant version
0111     % DIVA_AuditoryCortexCategorical_data.params.nbinsenergy = ...
0112     %     DIVA('AuditoryCortex','nbinsenergy');
0113     DIVA_AuditoryCortexCategorical_data.params.current=struct(...
0114         'SoundSegment',[],...
0115         'PhonemeLabel',[]);
0116     % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0117 
0118     % LIST OF MODULE CHANNELS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0119     out={'sound'};
0120     % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0121 
0122     %********************* END INIT ****************************%
0123 
0124    case 'save',
0125     SessionFolder=strcat(DIVA('SessionFolder'),filesep);
0126     if nargin<indexargin+1,
0127       initfile=[SessionFolder,'Session_','default',filesep,mfilename,'.mat'];
0128     else,
0129       initfile=[SessionFolder,'Session_',varargin{indexargin+1},filesep,mfilename,'.mat'];
0130     end
0131     params=DIVA_AuditoryCortexCategorical_data.params;
0132     save(initfile,'params');
0133     %********************* END SAVE ****************************%
0134 
0135    case 'exit',
0136     clear DIVA_AuditoryCortexCategorical_data;
0137     % CODE FOR CLEARING MEMORY OR THE LIKE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0138     % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0139 
0140     %********************* END EXIT ****************************%
0141    case 'disp', % Display Module Parameters
0142     disp(DIVA_AuditoryCortexCategorical_data.params);
0143     out=fieldnames(DIVA_AuditoryCortexCategorical_data.params);
0144     %********************* END DISP ****************************%
0145 
0146     %-----------------------------------------------------------%
0147 
0148    case 'sound', % Updates current sound segment info
0149     idx = find(~any(isnan(varargin{indexargin+1})));
0150     DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment(:,end+(1:length(idx))) = ...
0151         varargin{indexargin+1}(:,idx);
0152 
0153     % Sound segment complete: recognition and updating of phonemic targets
0154     if ~any(idx==size(varargin{indexargin+1},2)) && ...
0155           ~isempty(DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment),
0156       disp('DIVA_AuditoryCortexCategorical: Received sound segment');
0157 
0158 
0159       % Time normalize signal, SoundSegment = normalized signal, vel = transfer function
0160       e1 = DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment;
0161       e1 = mean(e1./repmat(max(e1,[],2),1,size(e1,2)),1);
0162       n = ceil(DIVA_AuditoryCortexCategorical_data.params.SilenceThresholdTime/...
0163                DIVA_AuditoryCortexCategorical_data.params.TimeStep);
0164 
0165       % Find valid indices of input signal (i.e. area between silences, assumed to be entire
0166       % signal in formant version)
0167 
0168       if(length(e1)==1)
0169         e2=1;
0170       else
0171         e2=find(min(sample_window(e1',n,n-1),[],1));
0172       end
0173       if isempty(e2),
0174         if ~isempty(e1),
0175           disp('DIVA_AuditoryCortexCategorical: Sound segment contains only silence');
0176         end
0177       else,
0178         if length(e2)>1, %dynamic target
0179           lims = max(1,min(length(e1),[e2(1)-n,e2(end)+2*n-1]));
0180           % normalization (SoundSegment is interpolated to a fixed number of timepoints)
0181           % by assigning more timepoints to parts of the SoundSegment with higher velocity
0182           [SoundSegment,time] = ...
0183               equalstep(DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment(:,lims(1):lims(2)),...
0184                         DIVA_AuditoryCortexCategorical_data.params.ntimepoints,'fixedlength');
0185           time = time*DIVA_AuditoryCortexCategorical_data.params.TimeStep;
0186           vel  = 1./diff(time);
0187         else, % static target
0188           SoundSegment = ...
0189               DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment(:,ones(1,DIVA_AuditoryCortexCategorical_data.params.ntimepoints));
0190           vel = 0;
0191         end
0192 
0193         % JSB: Recognition is OK, simple MSE calculation alright due to simple, formant basis
0194         % recognition
0195         matchlevels = ...
0196             nan + zeros(1,length(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets));
0197 
0198         % JSB: For now, no actual recognition of input signal, just assume user is correct
0199         %      when inputting 'label'
0200         %      Need to implement better recognition (formant based) for use in calculating
0201         %      appropriate signal variances (and means) rather than hardcode [400,2500,10000] Hz
0202         %      variance
0203 
0204         %  for n1=1:length(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets),
0205         %    % Determine MSE between current sound and stored targets
0206         %    matchlevels(n1) = ...
0207         %        mean(mean(...
0208         %            (DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment-...
0209         %             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(n1).mean).^2,1),2)
0210         %  end
0211         %  [matchlevel,winner]=min(matchlevels)
0212         %  if ~isempty(winner),
0213         %    disp(['DIVA_AuditoryCortexCategorical: Best match to sound segment ',...
0214         %          DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(winner).label]);
0215         %  end
0216 
0217         % JSB: Store/update current phoneme representation
0218 
0219         if ~isempty(DIVA_AuditoryCortexCategorical_data.params.current.PhonemeLabel),
0220           % update phoneme representation
0221           match = strmatch(DIVA_AuditoryCortexCategorical_data.params.current.PhonemeLabel,...
0222                            {DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(:).label},...
0223                            'exact');
0224 
0225           %% JSB: New Phonemic Target -- OK
0226           if isempty(match), % new phonemic target
0227             match = length(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets)+1;
0228             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean = SoundSegment;
0229             % Hardcode variance for now
0230             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var = ...
0231                 repmat([400;2500;10000],1,size(SoundSegment,2));
0232             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).velocity=vel;
0233             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).label = ...
0234                 DIVA_AuditoryCortexCategorical_data.params.current.PhonemeLabel;
0235             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples = 1;
0236             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).ntrained = 0;
0237 
0238             % updates AuditoryCortex targets too
0239             temp = DIVA('AuditoryCortex','WeightsFromTargets');
0240             temp(1:2*size(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean,1),...
0241                  DIVA_AuditoryCortexCategorical_data.params.ntimepoints*(match-1)+...
0242                  (1:DIVA_AuditoryCortexCategorical_data.params.ntimepoints)) = ...
0243                 [DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean - ...
0244                  sqrt(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var);...
0245                  DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean + ...
0246                  sqrt(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var)];
0247 
0248             DIVA_AuditoryCortexCategorical_data.params.match=match-1;
0249 
0250             % Copy auditory target representation to AuditoryCortex
0251             DIVA('AuditoryCortex','WeightsFromTargets',temp);
0252             disp(['DIVA_AuditoryCortexCategorical: Created new phonemic target: ',...
0253                   DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).label]);
0254 
0255           else, % existing phonemic target
0256                 % Update Mean target value
0257             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean = ...
0258                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples*...
0259                  DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean+SoundSegment)/...
0260                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples+1);
0261 
0262             % Update Variance
0263             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var = ...
0264                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples*...
0265                  DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var+...
0266                  repmat([400;2500;10000],1,size(SoundSegment,2)))/...
0267                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples+1);
0268 
0269             % Update Velocity
0270             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).velocity = ...
0271                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples*...
0272                  DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).velocity+vel)/...
0273                 (DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples+1);
0274 
0275             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples = ...
0276                 DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).nsamples+1;
0277 
0278             % updates AuditoryCortex targets too
0279             temp = DIVA('AuditoryCortex','WeightsFromTargets');
0280 
0281             temp(1:2*size(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean,1),...
0282                  DIVA_AuditoryCortexCategorical_data.params.ntimepoints*(match-1)+...
0283                  (1:DIVA_AuditoryCortexCategorical_data.params.ntimepoints)) = ...
0284                 [DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean - ...
0285                  sqrt(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var);...
0286                  DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).mean + ...
0287                  sqrt(DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).var)];
0288 
0289             DIVA_AuditoryCortexCategorical_data.params.match=match-1;
0290             % Copy auditory target representation to AuditoryCortex
0291             DIVA('AuditoryCortex','WeightsFromTargets',temp);
0292             disp(['DIVA_AuditoryCortexCategorical: Updated phonemic target ',...
0293                   DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).label]);
0294           end
0295 
0296         else
0297           disp('Phoneme Label not specified');
0298         end
0299       end
0300       DIVA_AuditoryCortexCategorical_data.params.current.SoundSegment=[];
0301       DIVA_AuditoryCortexCategorical_data.params.current.PhonemeLabel=[];
0302     end
0303 
0304     if ~nargout, % indicate activity in ModelStatePlot
0305       DIVA('ModelStatePlot','AuditoryCortexCategorical','sound');
0306     end
0307 
0308     %*********************** END SOUND ***************************%
0309 
0310    case 'label',
0311     DIVA_AuditoryCortexCategorical_data.params.current.PhonemeLabel=varargin{indexargin+1};
0312     %*********************** END LABEL ***************************%
0313 
0314    case 'target',% Sends target commands to SoundMap
0315 
0316     % Original--JSB 16-10-2006
0317     %  if iscell(varargin{indexargin+1}),
0318     %    targetlabel=varargin{indexargin+1}{1};
0319     %  else,
0320     %    targetlabel=varargin{indexargin+1};
0321     %    vel = 1;
0322     %  end
0323     %  if iscell(varargin{indexargin+1}) & length(varargin{indexargin+1})>1,
0324     %    vel=varargin{indexargin+1}{2};
0325     %  end % velocity
0326 
0327     % New for GoDIVA interface
0328     out=[];
0329     if(~iscell(varargin{indexargin+1}))
0330       tempQueue{1}=varargin{indexargin+1};
0331     else
0332       tempQueue=varargin{indexargin+1};
0333     end
0334     vel=1;
0335     m=1;
0336     delay(1)=0;
0337     for n=1:length(tempQueue)
0338       if(ischar(tempQueue{n}))
0339         targetQueue{m}=tempQueue{n};
0340         m=m+1;
0341         delay(m)=0;
0342       else
0343         delay(m)=delay(m)+tempQueue{n};
0344       end
0345     end
0346 
0347     for n=1:length(targetQueue)
0348       if(ischar(targetQueue{n}))
0349         targetlabel=targetQueue{n};
0350       end
0351       
0352       if(n>1)
0353         delay(n)=delay(n)+length(out)*DIVA('TimeStep');
0354       end
0355       match = strmatch(targetlabel,...
0356                        {DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(:).label},...
0357                        'exact');
0358       if isempty(match), % guard before indexing into AuditoryTargets below
0359         warning('DIVA_AuditoryCortexCategorical: Non-existing target'); %#ok<WNTAG>
0360         continue;
0361       end
0362       DIVA_AuditoryCortexCategorical_data.params.match=match-1; % Set current auditory target
0363 
0364       % Velocity scaling, hardcoded to normal velocity (see vel = ... above)
0365       velthis = vel*DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).velocity;
0366 
0367       if ~velthis, %#ok<BDLGI> % static target
0368         targetindexthis = DIVA_AuditoryCortexCategorical_data.params.ntimepoints/2*...
0369             ones(1,ceil(vel/DIVA_AuditoryCortexCategorical_data.params.TimeStep));
0370         idx=(match-1)*DIVA_AuditoryCortexCategorical_data.params.ntimepoints + targetindexthis;
0371         % target indexes incorporate the phonemic target 'match'
0372       else,        % dynamic target
0373         [nill,targetindexthis] = equalstep(cumsum([0,1./velthis]),...
0374                                            DIVA_AuditoryCortexCategorical_data.params.TimeStep);
0375         % Target indexes
0376         idx=(match-1)*DIVA_AuditoryCortexCategorical_data.params.ntimepoints + targetindexthis;
0377         % target indexes incorporate the phonemic target 'match'
0378       end
0379       out=idx;
0380       if ~nargout
0381         match=DIVA_AuditoryCortexCategorical_data.params.match+1;
0382         DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).ntrained=...
0383             DIVA_AuditoryCortexCategorical_data.params.AuditoryTargets(match).ntrained+1;
0384         DIVA('SoundMap','target',out,...
0385              sum(delay(1:n))+DIVA_AuditoryCortexCategorical_data.params.delayToSoundMap);
0386         DIVA('ModelStatePlot','AuditoryCortexCategorical','target');
0387       end
0388     end
0389     %*************************** END TARGET ***************************%
0390 
0391     % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0392 
0393    otherwise, % Handle read/write parameters
0394     if isfield(DIVA_AuditoryCortexCategorical_data.params,varargin{indexargin}),
0395       if indexargin==nargin,
0396         out=DIVA_AuditoryCortexCategorical_data.params.(varargin{indexargin});
0397       else,
0398         DIVA_AuditoryCortexCategorical_data.params.(varargin{indexargin})=varargin{indexargin+1};
0399       end
0400     else,
0401       warning('DIVA_AuditoryCortexCategorical: wrong argument');
0402     end
0403   end
0404 end
0405

Generated on Tue 27-Mar-2007 12:06:24 by m2html © 2003