Python types.complex_same_precision_as function code examples


This article collects typical usage examples of the Python function pycbc.types.complex_same_precision_as. If you are trying to work out what complex_same_precision_as does, how to call it, or what it looks like in real code, the selected examples below should help.



The sections below present 17 code examples of the complex_same_precision_as function, drawn from open-source projects and ordered by default according to popularity.
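
A quick illustration before the examples (a minimal sketch assuming pycbc and numpy are installed; it is not taken from the projects below): complex_same_precision_as simply returns the complex numpy dtype whose precision matches that of a given pycbc Array, TimeSeries or FrequencySeries.

import numpy
from pycbc.types import TimeSeries, complex_same_precision_as, real_same_precision_as

# single- and double-precision time series
ts_single = TimeSeries(numpy.zeros(16, dtype=numpy.float32), delta_t=1.0/4096)
ts_double = TimeSeries(numpy.zeros(16, dtype=numpy.float64), delta_t=1.0/4096)

print(complex_same_precision_as(ts_single))  # numpy.complex64
print(complex_same_precision_as(ts_double))  # numpy.complex128
print(real_same_precision_as(ts_single))     # numpy.float32

This is why the examples below pass it as the dtype of zeros() when allocating FFT output buffers: the buffer keeps the input's precision while switching to a complex type.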

Example 1: make_padded_frequency_series

def make_padded_frequency_series(vec,filter_N=None):
    """Pad a TimeSeries with a length of zeros greater than its length, such
    that the total length is the closest power of 2. This prevents the effects 
    of wraparound.
    """
    if filter_N is None:
        power = ceil(log(len(vec),2))+1
        N = 2 ** power
    else:
        N = filter_N
    n = N/2+1    
    
   
    if isinstance(vec,FrequencySeries):
        vectilde = FrequencySeries(zeros(n, dtype=complex_same_precision_as(vec)),
                                   delta_f=1.0,copy=False)
        if len(vectilde) < len(vec):
            cplen = len(vectilde)
        else:
            cplen = len(vec)
        vectilde[0:cplen] = vec[0:cplen]  
        delta_f = vec.delta_f
    
        
    if isinstance(vec,TimeSeries):  
        vec_pad = TimeSeries(zeros(N),delta_t=vec.delta_t,
                         dtype=real_same_precision_as(vec))
        vec_pad[0:len(vec)] = vec   
        delta_f = 1.0/(vec.delta_t*N)
        vectilde = FrequencySeries(zeros(n),delta_f=1.0, 
                               dtype=complex_same_precision_as(vec))
        fft(vec_pad,vectilde)
        
    vectilde = FrequencySeries(vectilde * DYN_RANGE_FAC,delta_f=delta_f,dtype=complex64)
    return vectilde
Developer: AbhayMK | Project: pycbc | Lines: 35 | Source: banksim.py


Example 2: make_frequency_series

def make_frequency_series(vec):
    """Return a frequency series of the input vector.

    If the input is a frequency series it is returned, else if the input
    vector is a real time series it is fourier transformed and returned as a 
    frequency series. 
    
    Parameters
    ----------
    vector : TimeSeries or FrequencySeries  

    Returns
    -------
    Frequency Series: FrequencySeries
        A frequency domain version of the input vector.
    """
    if isinstance(vec, FrequencySeries):
        return vec
    if isinstance(vec, TimeSeries):
        N = len(vec)
        n = N/2+1    
        delta_f = 1.0 / N / vec.delta_t
        vectilde =  FrequencySeries(zeros(n, dtype=complex_same_precision_as(vec)), 
                                    delta_f=delta_f, copy=False)
        fft(vec, vectilde)   
        return vectilde
    else:
        raise TypeError("Can only convert a TimeSeries to a FrequencySeries")
Developer: aravind-pazhayath | Project: pycbc | Lines: 28 | Source: matchedfilter.py
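
A short usage sketch for the function above (assuming it is importable as pycbc.filter.make_frequency_series, which is where matchedfilter.py lives; the numbers are illustrative):

import numpy
from pycbc.types import TimeSeries
from pycbc.filter import make_frequency_series

# one second of a 100 Hz sine wave sampled at 4096 Hz, single precision
t = numpy.arange(4096) / 4096.0
ts = TimeSeries(numpy.sin(2 * numpy.pi * 100 * t).astype(numpy.float32),
                delta_t=1.0/4096)

stilde = make_frequency_series(ts)
print(len(stilde), stilde.delta_f, stilde.dtype)  # 2049 1.0 complex64

A FrequencySeries passed in is returned unchanged, and anything else raises TypeError, matching the branches in the code above.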


Example 3: apply_fd_time_shift

def apply_fd_time_shift(htilde, shifttime, fseries=None, copy=True):
    """Shifts a frequency domain waveform in time. The shift applied is
    shifttime - htilde.epoch.

    Parameters
    ----------
    htilde : FrequencySeries
        The waveform frequency series.
    shifttime : float
        The time to shift the frequency series to.
    fseries : {None, numpy array}
        The frequencies of each element in the FrequencySeries. If None,
        will use htilde.sample_frequencies. Note: providing a frequency series
        can reduce the execution time of this function by as much as a half.
    copy : {True, bool}
        Make a copy of htilde before applying the time shift. If False, the time
        shift will be applied to htilde's data.

    Returns
    -------
    FrequencySeries
        A frequency series with the waveform shifted to the new time. If copy
        is True, this will be a new frequency series; if copy is False, it will
        be the same object as htilde.
    """
    dt = float(shifttime - htilde.epoch)
    if fseries is None:
        fseries = htilde.sample_frequencies.numpy()
    shift = Array(numpy.exp(-2j*numpy.pi*dt*fseries), 
                dtype=complex_same_precision_as(htilde))
    if copy:
        htilde = 1. * htilde
    htilde *= shift
    return htilde
Developer: lppekows | Project: pycbc | Lines: 34 | Source: utils.py
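
The multiplication by exp(-2j*pi*dt*f) above is the Fourier shift theorem: a phase ramp in the frequency domain delays the time-domain signal by dt. A self-contained numpy check of that identity (independent of pycbc; all names here are illustrative):

import numpy

N, delta_t = 1024, 1.0 / 1024
x = numpy.zeros(N)
x[100] = 1.0                                  # impulse at sample 100

freqs = numpy.fft.rfftfreq(N, d=delta_t)
xtilde = numpy.fft.rfft(x)

dt = 50 * delta_t                             # delay by 50 samples
shifted = numpy.fft.irfft(xtilde * numpy.exp(-2j * numpy.pi * freqs * dt), n=N)

print(numpy.argmax(shifted))                  # 150: the impulse moved by 50 samples

Note that the shift is circular, which is the wraparound effect other examples on this page take care to avoid.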


Example 4: bandlimited_interpolate

def bandlimited_interpolate(series, delta_f):
    """Return a new PSD that has been interpolated to the desired delta_f.

    Parameters
    ----------
    series : FrequencySeries
        Frequency series to be interpolated.
    delta_f : float
        The desired delta_f of the output

    Returns
    -------
    interpolated series : FrequencySeries
        A new FrequencySeries that has been interpolated.
    """
    series = FrequencySeries(series, dtype=complex_same_precision_as(series), delta_f=series.delta_f)

    N = (len(series) - 1) * 2
    delta_t = 1.0 / series.delta_f / N

    new_N = int(1.0 / (delta_t * delta_f))
    new_n = new_N / 2 + 1

    series_in_time = TimeSeries(zeros(N), dtype=real_same_precision_as(series), delta_t=delta_t)
    ifft(series, series_in_time)

    padded_series_in_time = TimeSeries(zeros(new_N), dtype=series_in_time.dtype, delta_t=delta_t)
    padded_series_in_time[0:N/2] = series_in_time[0:N/2]
    padded_series_in_time[new_N-N/2:new_N] = series_in_time[N/2:N]

    interpolated_series = FrequencySeries(zeros(new_n), dtype=series.dtype, delta_f=delta_f)
    fft(padded_series_in_time, interpolated_series)

    return interpolated_series
Developer: johnveitch | Project: pycbc | Lines: 34 | Source: estimate.py


Example 5: td_waveform_to_fd_waveform

def td_waveform_to_fd_waveform(waveform, out=None, length=None,
                               buffer_length=100):
    """ Convert a time domain into a frequency domain waveform by FFT.
        As a waveform is assumed to "wrap" in the time domain one must be
        careful to ensure the waveform goes to 0 at both "boundaries". To
        ensure this is done correctly the waveform must have the epoch set such that
        the merger time is at t=0 and the length of the waveform should be
        shorter than the desired length of the FrequencySeries (times 2 - 1)
        so that zeroes can be suitably pre- and post-pended before FFTing.
        If given, out is a memory array to be used as the output of the FFT.
        If not given memory is allocated internally.
        If present the length of the returned FrequencySeries is determined
        from the length of out. If out is not given the length can be provided
        explicitly, or it will be chosen as the nearest power of 2. If choosing
        length explicitly the waveform length + buffer_length is used when
        choosing the nearest binary number so that some zero padding is always
        added.
    """
    # Figure out lengths and set out if needed
    if out is None:
        if length is None:
            N = pnutils.nearest_larger_binary_number(len(waveform) + \
                                                     buffer_length)
            n = int(N//2) + 1
        else:
            n = length
            N = (n-1)*2
        out = zeros(n, dtype=complex_same_precision_as(waveform))
    else:
        n = len(out)
        N = (n-1)*2
    delta_f =  1. / (N * waveform.delta_t)

    # total duration of the waveform
    tmplt_length = len(waveform) * waveform.delta_t
    if len(waveform) > N:
        err_msg = "The time domain template is longer than the intended "
        err_msg += "duration in the frequency domain. This situation is "
        err_msg += "not supported in this function. Please shorten the "
        err_msg += "waveform appropriately before calling this function or "
        err_msg += "increase the allowed waveform length. "
        err_msg += "Waveform length (in samples): {}".format(len(waveform))
        err_msg += ". Intended length: {}.".format(N)
        raise ValueError(err_msg)
    # for IMR templates the zero of time is at max amplitude (merger)
    # thus the start time is minus the duration of the template from
    # lower frequency cutoff to merger, i.e. minus the 'chirp time'
    tChirp = - float( waveform.start_time )  # conversion from LIGOTimeGPS
    waveform.resize(N)
    k_zero = int(waveform.start_time / waveform.delta_t)
    waveform.roll(k_zero)
    htilde = FrequencySeries(out, delta_f=delta_f, copy=False)
    fft(waveform.astype(real_same_precision_as(htilde)), htilde)
    htilde.length_in_time = tmplt_length
    htilde.chirp_length = tChirp
    return htilde
Developer: bhooshan-gadre | Project: pycbc | Lines: 56 | Source: waveform.py
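
A hedged usage sketch (assuming get_td_waveform and td_waveform_to_fd_waveform are importable from pycbc.waveform and that a time-domain approximant such as SEOBNRv4 is available via lalsimulation; the parameters are illustrative):

from pycbc.waveform import get_td_waveform, td_waveform_to_fd_waveform

# pycbc time-domain waveforms have their epoch set so the merger sits at t=0,
# which is exactly the convention td_waveform_to_fd_waveform expects
hp, _ = get_td_waveform(approximant='SEOBNRv4', mass1=30, mass2=30,
                        delta_t=1.0/4096, f_lower=30.0)

htilde = td_waveform_to_fd_waveform(hp)
print(htilde.delta_f, htilde.length_in_time, htilde.chirp_length)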


Example 6: make_padded_frequency_series

def make_padded_frequency_series(vec, filter_N=None, delta_f=None):
    """Convert vec (TimeSeries or FrequencySeries) to a FrequencySeries. If
    filter_N and/or delta_f are given, the output will take those values. If
    not told otherwise the code will attempt to pad a timeseries first such that
    the waveform will not wrap around. However, if the requested 1/delta_f is
    shorter than the waveform duration then wraparound *will* be allowed.
    """
    if filter_N is None:
        power = ceil(log(len(vec), 2)) + 1
        N = 2 ** power
    else:
        N = filter_N
    n = N / 2 + 1

    if isinstance(vec, FrequencySeries):
        vectilde = FrequencySeries(zeros(n, dtype=complex_same_precision_as(vec)),
                                   delta_f=1.0, copy=False)
        if len(vectilde) < len(vec):
            cplen = len(vectilde)
        else:
            cplen = len(vec)
        vectilde[0:cplen] = vec[0:cplen]
        delta_f = vec.delta_f

    elif isinstance(vec, TimeSeries):
        # First determine if the timeseries is too short for the specified df
        # and increase if necessary
        curr_length = len(vec)
        new_length = int(nearest_larger_binary_number(curr_length))
        while new_length * vec.delta_t < 1./delta_f:
            new_length = new_length * 2
        vec.resize(new_length)
        # Then convert to frequencyseries
        v_tilde = vec.to_frequencyseries()
        # Then convert frequencyseries to required length and spacing by keeping
        # only every nth sample if delta_f needs increasing, and cutting at
        # Nyquist if the max frequency is too high.
        # NOTE: This assumes that the input and output data is using binary
        #       lengths.
        i_delta_f = v_tilde.get_delta_f()
        v_tilde = v_tilde.numpy()
        df_ratio = int(delta_f / i_delta_f)
        n_freq_len = int((n-1) * df_ratio +1)
        assert(n <= len(v_tilde))
        df_ratio = int(delta_f / i_delta_f)
        v_tilde = v_tilde[:n_freq_len:df_ratio]
        vectilde = FrequencySeries(v_tilde, delta_f=delta_f, dtype=complex128)

    return FrequencySeries(vectilde * DYN_RANGE_FAC, delta_f=delta_f,
                           dtype=complex128)
Developer: a-r-williamson | Project: pycbc | Lines: 50 | Source: test_skymax.py


Example 7: lfilter

def lfilter(coefficients, timeseries):
    """ Apply filter coefficients to a time series
    
    Parameters
    ----------
    coefficients: numpy.ndarray
        Filter coefficients to apply
    timeseries: numpy.ndarray
        Time series to be filtered.

    Returns
    -------
    tseries: numpy.ndarray
        filtered array
    """
    from pycbc.fft import fft, ifft
    from pycbc.filter import correlate

    # If there aren't many points just use the default scipy method
    if len(timeseries) < 2**7:
        if hasattr(timeseries, 'numpy'):
            timeseries = timeseries.numpy()
        series = scipy.signal.lfilter(coefficients, 1.0, timeseries)
        return series
    else:
        cseries = (Array(coefficients[::-1] * 1)).astype(timeseries.dtype)
        cseries.resize(len(timeseries))
        cseries.roll(len(timeseries) - len(coefficients) + 1)
        timeseries = Array(timeseries, copy=False)

        flen = len(cseries) / 2 + 1
        ftype = complex_same_precision_as(timeseries)

        cfreq = zeros(flen, dtype=ftype)
        tfreq = zeros(flen, dtype=ftype)

        fft(Array(cseries), cfreq)
        fft(Array(timeseries), tfreq)

        cout = zeros(flen, ftype)
        out = zeros(len(timeseries), dtype=timeseries)

        correlate(cfreq, tfreq, cout)   
        ifft(cout, out)

        return out.numpy()  / len(out)
Developer: millsjc | Project: pycbc | Lines: 46 | Source: resample.py
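
The FFT branch above amounts to a circular convolution of the data with the filter coefficients, which for a filter much shorter than the data agrees with scipy.signal.lfilter everywhere except the first few wrap-around samples. A small sanity check of that equivalence in plain numpy/scipy (independent of the pycbc machinery above):

import numpy
import scipy.signal

rng = numpy.random.default_rng(0)
coeffs = rng.normal(size=16)
data = rng.normal(size=4096)

direct = scipy.signal.lfilter(coeffs, 1.0, data)

# circular FFT convolution: zero-pad the coefficients to the data length
circular = numpy.fft.irfft(numpy.fft.rfft(data) *
                           numpy.fft.rfft(coeffs, n=len(data)), n=len(data))

# away from the first len(coeffs)-1 samples the two results coincide
print(numpy.allclose(direct[len(coeffs):], circular[len(coeffs):]))  # True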


Example 8: match

def match(vec1, vec2, psd=None, low_frequency_cutoff=None,
          high_frequency_cutoff=None, v1_norm=None, v2_norm=None):
    """ Return the match between the two TimeSeries or FrequencySeries.
    
    Return the match between two waveforms. This is equivalent to the overlap
    maximized over time and phase. 

    Parameters
    ----------
    vec1 : TimeSeries or FrequencySeries 
        The input vector containing a waveform.
    vec2 : TimeSeries or FrequencySeries 
        The input vector containing a waveform.
    psd : Frequency Series
        A power spectral density to weight the overlap.
    low_frequency_cutoff : {None, float}, optional
        The frequency to begin the match.
    high_frequency_cutoff : {None, float}, optional
        The frequency to stop the match.
    v1_norm : {None, float}, optional
        The normalization of the first waveform. This is equivalent to its
        sigmasq value. If None, it is internally calculated. 
    v2_norm : {None, float}, optional
        The normalization of the second waveform. This is equivalent to its
        sigmasq value. If None, it is internally calculated. 
    Returns
    -------
    match: float
    """

    htilde = make_frequency_series(vec1)
    stilde = make_frequency_series(vec2)

    N = (len(htilde)-1) * 2

    global _snr
    if _snr is None or _snr.dtype != htilde.dtype or len(_snr) != N:
        _snr = zeros(N,dtype=complex_same_precision_as(vec1))
    snr, corr, snr_norm = matched_filter_core(htilde,stilde,psd,low_frequency_cutoff,
                             high_frequency_cutoff, v1_norm, out=_snr)
    maxsnr, max_id = snr.abs_max_loc()
    if v2_norm is None:
        v2_norm = sigmasq(stilde, psd, low_frequency_cutoff, high_frequency_cutoff)
    return maxsnr * snr_norm / sqrt(v2_norm), max_id
Developer: aravind-pazhayath | Project: pycbc | Lines: 44 | Source: matchedfilter.py


Example 9: apply_fd_time_shift

def apply_fd_time_shift(htilde, shifttime, kmin=0, fseries=None, copy=True):
    """Shifts a frequency domain waveform in time. The shift applied is
    shifttime - htilde.epoch.

    Parameters
    ----------
    htilde : FrequencySeries
        The waveform frequency series.
    shifttime : float
        The time to shift the frequency series to.
    kmin : {0, int}
        The starting index of htilde to apply the time shift. Default is 0.
    fseries : {None, numpy array}
        The frequencies of each element in htilde. This is only needed if htilde is not
        sampled at equal frequency steps.
    copy : {True, bool}
        Make a copy of htilde before applying the time shift. If False, the time
        shift will be applied to htilde's data.

    Returns
    -------
    FrequencySeries
        A frequency series with the waveform shifted to the new time. If copy
        is True, this will be a new frequency series; if copy is False, it will
        be the same object as htilde.
    """
    dt = float(shifttime - htilde.epoch)
    if dt == 0.:
        # no shift to apply, just copy if desired
        if copy:
            htilde = 1. * htilde
    elif isinstance(htilde, FrequencySeries):
        # FrequencySeries means equally sampled in frequency, use faster shifting
        htilde = apply_fseries_time_shift(htilde, dt, kmin=kmin, copy=copy)
    else:
        if fseries is None:
            fseries = htilde.sample_frequencies.numpy()
        shift = Array(numpy.exp(-2j*numpy.pi*dt*fseries),
                    dtype=complex_same_precision_as(htilde))
        if copy:
            htilde = 1. * htilde
        htilde *= shift
    return htilde
Developer: a-r-williamson | Project: pycbc | Lines: 43 | Source: utils.py


Example 10: frequency_noise_from_psd

def frequency_noise_from_psd(psd, seed=None):
    """ Create noise with a given psd.

    Return noise coloured with the given psd. The returned noise
    FrequencySeries has the same length and frequency step as the given psd.
    Note that if unique noise is desired a unique seed should be provided.

    Parameters
    ----------
    psd : FrequencySeries
        The noise weighting to color the noise.
    seed : {0, int} or None
        The seed to generate the noise. If None is specified,
        the seed will not be reset.

    Returns
    --------
    noise : FrequencySeries
        A FrequencySeries containing gaussian noise colored by the given psd.
    """
    sigma = 0.5 * (psd / psd.delta_f) ** (0.5)
    if seed is not None:
        numpy.random.seed(seed)
    sigma = sigma.numpy()
    dtype = complex_same_precision_as(psd)

    not_zero = (sigma != 0)

    sigma_red = sigma[not_zero]
    noise_re = numpy.random.normal(0, sigma_red)
    noise_co = numpy.random.normal(0, sigma_red)
    noise_red = noise_re + 1j * noise_co

    noise = numpy.zeros(len(sigma), dtype=dtype)
    noise[not_zero] = noise_red

    return FrequencySeries(noise,
                           delta_f=psd.delta_f,
                           dtype=dtype)
Developer: a-r-williamson | Project: pycbc | Lines: 39 | Source: gaussian.py
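
A usage sketch (assuming the function is available as pycbc.noise.frequency_noise_from_psd and that the analytic PSD pycbc.psd.aLIGOZeroDetHighPower is available; the numbers are illustrative):

import pycbc.psd
from pycbc.noise import frequency_noise_from_psd

delta_f = 1.0 / 16
flen = int(2048 / delta_f) + 1                   # frequencies up to 2048 Hz
psd = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, 10.0)

noise = frequency_noise_from_psd(psd, seed=127)
print(len(noise), noise.delta_f)                 # same length and delta_f as the PSD

Because this PSD is double precision, complex_same_precision_as(psd) returns numpy.complex128 and the noise comes out as a complex128 FrequencySeries.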


Example 11: inverse_spectrum_truncation

def inverse_spectrum_truncation(psd, max_filter_len, low_frequency_cutoff=None, trunc_method=None):
    """Modify a PSD such that the impulse response associated with its inverse
    square root is no longer than `max_filter_len` time samples. In practice
    this corresponds to a coarse graining or smoothing of the PSD.

    Parameters
    ----------
    psd : FrequencySeries
        PSD whose inverse spectrum is to be truncated.
    max_filter_len : int
        Maximum length of the time-domain filter in samples.
    low_frequency_cutoff : {None, int}
        Frequencies below `low_frequency_cutoff` are zeroed in the output.
    trunc_method : {None, 'hann'}
        Function used for truncating the time-domain filter.
        None produces a hard truncation at `max_filter_len`.

    Returns
    -------
    psd : FrequencySeries
        PSD whose inverse spectrum has been truncated.

    Raises
    ------
    ValueError
        For invalid types or values of `max_filter_len` and `low_frequency_cutoff`.

    Notes
    -----
    See arXiv:gr-qc/0509116 for details.
    """
    # sanity checks
    if type(max_filter_len) is not int or max_filter_len <= 0:
        raise ValueError('max_filter_len must be a positive integer')
    if low_frequency_cutoff is not None and low_frequency_cutoff < 0 \
        or low_frequency_cutoff > psd.sample_frequencies[-1]:
        raise ValueError('low_frequency_cutoff must be within the bandwidth of the PSD')

    N = (len(psd)-1)*2

    inv_asd = FrequencySeries((1. / psd)**0.5, delta_f=psd.delta_f, \
        dtype=complex_same_precision_as(psd))
        
    inv_asd[0] = 0
    inv_asd[N/2] = 0
    q = TimeSeries(numpy.zeros(N), delta_t=(N / psd.delta_f), \
        dtype=real_same_precision_as(psd))

    if low_frequency_cutoff:
        kmin = int(low_frequency_cutoff / psd.delta_f)
        inv_asd[0:kmin] = 0

    ifft(inv_asd, q)
    
    trunc_start = max_filter_len / 2
    trunc_end = N - max_filter_len / 2

    if trunc_method == 'hann':
        trunc_window = Array(numpy.hanning(max_filter_len), dtype=q.dtype)
        q[0:trunc_start] *= trunc_window[max_filter_len/2:max_filter_len]
        q[trunc_end:N] *= trunc_window[0:max_filter_len/2]

    q[trunc_start:trunc_end] = 0
    psd_trunc = FrequencySeries(numpy.zeros(len(psd)), delta_f=psd.delta_f, \
                                dtype=complex_same_precision_as(psd))
    fft(q, psd_trunc)
    psd_trunc *= psd_trunc.conj()
    psd_out = 1. / abs(psd_trunc)

    return psd_out
Developer: johnveitch | Project: pycbc | Lines: 70 | Source: estimate.py
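
For reference, later examples on this page (Examples 15 and 17) call this routine through pycbc.psd.inverse_spectrum_truncation. A minimal sketch of that call pattern (assuming pycbc.psd.aLIGOZeroDetHighPower is available; the numbers are illustrative):

import pycbc.psd

sample_rate, duration = 4096, 16
delta_f = 1.0 / duration
flen = int(sample_rate / 2 / delta_f) + 1
psd = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, 20.0)

# limit the impulse response of the inverse ASD to 4 seconds of data
psd_smooth = pycbc.psd.inverse_spectrum_truncation(
        psd, 4 * sample_rate, low_frequency_cutoff=20.0, trunc_method='hann')
print(len(psd_smooth) == len(psd))               # True: the PSD is smoothed, not resized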


Example 12: power_chisq_from_precomputed

def power_chisq_from_precomputed(corr, snr, snr_norm, bins, indices=None):
    """Calculate the chisq timeseries from precomputed values

    This function calculates the chisq at all times by performing an
    inverse FFT of each bin.

    Parameters
    ----------

    corr: FrequencySeries
        The product of the template and data in the frequency domain.
    snr: TimeSeries
        The unnormalized snr time series.
    snr_norm:
        The snr normalization factor (true snr = snr * snr_norm) EXPLAINME - define 'true snr'?
    bins: List of integers
        The edges of the chisq bins.
    indices: {Array, None}, optional
        Index values into snr that indicate where to calculate
        chisq values. If none, calculate chisq for all possible indices.

    Returns
    -------
    chisq: TimeSeries
    """
    # Get workspace memory
    global _q_l, _qtilde_l, _chisq_l

    if _q_l is None or len(_q_l) != len(snr):
        q = zeros(len(snr), dtype=complex_same_precision_as(snr))
        qtilde = zeros(len(snr), dtype=complex_same_precision_as(snr))
        _q_l = q
        _qtilde_l = qtilde
    else:
        q = _q_l
        qtilde = _qtilde_l

    if indices is not None:
        snr = snr.take(indices)

    if _chisq_l is None or len(_chisq_l) < len(snr):
        chisq = zeros(len(snr), dtype=real_same_precision_as(snr))
        _chisq_l = chisq
    else:
        chisq = _chisq_l[0:len(snr)]
        chisq.clear()

    num_bins = len(bins) - 1

    for j in range(num_bins):
        k_min = int(bins[j])
        k_max = int(bins[j+1])

        qtilde[k_min:k_max] = corr[k_min:k_max]
        pycbc.fft.ifft(qtilde, q)
        qtilde[k_min:k_max].clear()

        if indices is not None:
            chisq_accum_bin(chisq, q.take(indices))
        else:
            chisq_accum_bin(chisq, q)

    chisq = (chisq * num_bins - snr.squared_norm()) * (snr_norm ** 2.0)

    if indices is not None:
        return chisq
    else:
        return TimeSeries(chisq, delta_t=snr.delta_t, epoch=snr.start_time, copy=False)
Developer: hthrfong | Project: pycbc | Lines: 68 | Source: chisq.py


Example 13: matched_filter_core

def matched_filter_core(template, data, psd=None, low_frequency_cutoff=None,
                  high_frequency_cutoff=None, h_norm=None, out=None, corr_out=None):
    """ Return the complex snr and normalization. 
    
    Return the complex snr, along with its associated normalization of the template,
    matched filtered against the data. 

    Parameters
    ----------
    template : TimeSeries or FrequencySeries 
        The template waveform
    data : TimeSeries or FrequencySeries 
        The strain data to be filtered.
    psd : {FrequencySeries}, optional
        The noise weighting of the filter.
    low_frequency_cutoff : {None, float}, optional
        The frequency to begin the filter calculation. If None, begin at the
        first frequency after DC.
    high_frequency_cutoff : {None, float}, optional
        The frequency to stop the filter calculation. If None, continue to the 
        Nyquist frequency.
    h_norm : {None, float}, optional
        The template normalization. If none, this value is calculated internally.
    out : {None, Array}, optional
        An array to use as memory for snr storage. If None, memory is allocated 
        internally.
    corr_out : {None, Array}, optional
        An array to use as memory for correlation storage. If None, memory is allocated 
        internally. If provided, management of the vector is handled externally by the
        caller. No zero'ing is done internally. 

    Returns
    -------
    snr : TimeSeries
        A time series containing the complex snr. 
    correlation: FrequencySeries
        A frequency series containing the correlation vector. 
    norm : float
        The normalization of the complex snr.  
    """
    if corr_out is not None:
        _qtilde = corr_out
    else:
        global _qtilde_t
        _qtilde = _qtilde_t
  
    htilde = make_frequency_series(template)
    stilde = make_frequency_series(data)

    if len(htilde) != len(stilde):
        raise ValueError("Length of template and data must match")

    N = (len(stilde)-1) * 2   
    kmin, kmax = get_cutoff_indices(low_frequency_cutoff,
                                   high_frequency_cutoff, stilde.delta_f, N)   

    if out is None:
        _q = zeros(N, dtype=complex_same_precision_as(data))
    elif (len(out) == N) and type(out) is Array and out.kind =='complex':
        _q = out
    else:
        raise TypeError('Invalid Output Vector: wrong length or dtype')
        
    if corr_out:
        pass
    elif (_qtilde is None) or (len(_qtilde) != N) or _qtilde.dtype != data.dtype:
        _qtilde_t = _qtilde = zeros(N, dtype=complex_same_precision_as(data))
    else:
        _qtilde.clear()         
    
    correlate(htilde[kmin:kmax], stilde[kmin:kmax], _qtilde[kmin:kmax])

    if psd is not None:
        if isinstance(psd, FrequencySeries):
            if psd.delta_f == stilde.delta_f :
                _qtilde[kmin:kmax] /= psd[kmin:kmax]
            else:
                raise TypeError("PSD delta_f does not match data")
        else:
            raise TypeError("PSD must be a FrequencySeries")
            
    ifft(_qtilde, _q)
    
    if h_norm is None:
        h_norm = sigmasq(htilde, psd, low_frequency_cutoff, high_frequency_cutoff)     

    norm = (4.0 * stilde.delta_f) / sqrt( h_norm)
    delta_t = 1.0 / (N * stilde.delta_f)
    
    return (TimeSeries(_q, epoch=stilde._epoch, delta_t=delta_t, copy=False), 
           FrequencySeries(_qtilde, epoch=stilde._epoch, delta_f=htilde.delta_f, copy=False), 
           norm)
Developer: aravind-pazhayath | Project: pycbc | Lines: 92 | Source: matchedfilter.py


Example 14: detect_loud_glitches

def detect_loud_glitches(strain, psd_duration=16, psd_stride=8,
                         psd_avg_method='median', low_freq_cutoff=30.,
                         threshold=50., cluster_window=5., corrupted_time=4.,
                         high_freq_cutoff=None, output_intermediates=False):
    """Automatic identification of loud transients for gating purposes."""

    if output_intermediates:
        strain.save_to_wav('strain_conditioned.wav')

    # don't waste time trying to optimize a single FFT
    pycbc.fft.fftw.set_measure_level(0)

    logging.info('Autogating: estimating PSD')
    psd = pycbc.psd.welch(strain, seg_len=psd_duration*strain.sample_rate,
                          seg_stride=psd_stride*strain.sample_rate,
                          avg_method=psd_avg_method)

    logging.info('Autogating: time -> frequency')
    strain_tilde = FrequencySeries(numpy.zeros(len(strain) / 2 + 1),
                                   delta_f=1./strain.duration,
                                   dtype=complex_same_precision_as(strain))
    pycbc.fft.fft(strain, strain_tilde)

    logging.info('Autogating: interpolating PSD')
    psd = pycbc.psd.interpolate(psd, strain_tilde.delta_f)

    logging.info('Autogating: whitening')
    if high_freq_cutoff:
        kmax = int(high_freq_cutoff / strain_tilde.delta_f)
        strain_tilde[kmax:] = 0.
        norm = high_freq_cutoff - low_freq_cutoff
    else:
        norm = strain.sample_rate/2. - low_freq_cutoff
    strain_tilde /= (psd * norm) ** 0.5
    kmin = int(low_freq_cutoff / strain_tilde.delta_f)
    strain_tilde[0:kmin] = 0.

    # FIXME at this point the strain can probably be downsampled

    logging.info('Autogating: frequency -> time')
    pycbc.fft.ifft(strain_tilde, strain)

    pycbc.fft.fftw.set_measure_level(pycbc.fft.fftw._default_measurelvl)

    logging.info('Autogating: stdev of whitened strain is %.4f', numpy.std(strain))

    if output_intermediates:
        strain.save_to_wav('strain_whitened.wav')

    mag = abs(strain)
    if output_intermediates:
        mag.save('strain_whitened_mag.npy')
    mag = numpy.array(mag, dtype=numpy.float32)

    # remove corrupted strain at the ends
    corrupted_idx = int(corrupted_time * strain.sample_rate)
    mag[0:corrupted_idx] = 0
    mag[-1:-corrupted_idx-1:-1] = 0

    logging.info('Autogating: finding loud peaks')
    indices = numpy.where(mag > threshold)[0]
    cluster_idx = pycbc.events.findchirp_cluster_over_window(
            indices, mag[indices], int(cluster_window*strain.sample_rate))
    times = [idx * strain.delta_t + strain.start_time \
             for idx in indices[cluster_idx]]
    return times
Developer: benjaminlackey | Project: pycbc | Lines: 66 | Source: strain.py


Example 15: colored_noise

def colored_noise(psd, start_time, end_time, seed=0, low_frequency_cutoff=1.0):
    """ Create noise from a PSD

    Return noise from the chosen PSD. Note that if unique noise is desired
    a unique seed should be provided.

    Parameters
    ----------
    psd : pycbc.types.FrequencySeries
        PSD to color the noise
    start_time : int
        Start time in GPS seconds to generate noise
    end_time : int
        End time in GPS seconds to generate noise
    seed : {None, int}
        The seed to generate the noise.
    low_frequency_cutoff : {1.0, float}
        The low frequency cutoff to pass to the PSD generation.

    Returns
    --------
    noise : TimeSeries
        A TimeSeries containing gaussian noise colored by the given psd.
    """
    psd = psd.copy()

    flen = int(SAMPLE_RATE / psd.delta_f) / 2 + 1
    oldlen = len(psd)
    psd.resize(flen)

    # Want to avoid zeroes in PSD.
    max_val = psd.max()
    for i in xrange(len(psd)):
        if i >= (oldlen-1):
            psd.data[i] = psd[oldlen - 2]
        if psd[i] == 0:
            psd.data[i] = max_val

    wn_dur = int(end_time - start_time) + 2*FILTER_LENGTH
    if psd.delta_f >= 1. / (2.*FILTER_LENGTH):
        # If the PSD is short enough, this method is less memory intensive than
        # resizing and then calling inverse_spectrum_truncation
        psd = pycbc.psd.interpolate(psd, 1.0 / (2.*FILTER_LENGTH))
        # inverse_spectrum_truncation truncates the inverted PSD. To truncate
        # the non-inverted PSD we give it the inverted PSD to truncate and then
        # invert the output.
        psd = 1. / pycbc.psd.inverse_spectrum_truncation(1./psd,
                                FILTER_LENGTH * SAMPLE_RATE,
                                low_frequency_cutoff=low_frequency_cutoff,
                                trunc_method='hann')
        psd = psd.astype(complex_same_precision_as(psd))
        # Zero-pad the time-domain PSD to desired length. Zeroes must be added
        # in the middle, so some rolling between a resize is used.
        psd = psd.to_timeseries()
        psd.roll(SAMPLE_RATE * FILTER_LENGTH)
        psd.resize(wn_dur * SAMPLE_RATE)
        psd.roll(-SAMPLE_RATE * FILTER_LENGTH)
        # As time series is still mirrored the complex frequency components are
        # 0. But convert to real by using abs as in inverse_spectrum_truncate
        psd = psd.to_frequencyseries()
    else:
        psd = pycbc.psd.interpolate(psd, 1.0 / wn_dur)
        psd = 1. / pycbc.psd.inverse_spectrum_truncation(1./psd,
                                FILTER_LENGTH * SAMPLE_RATE,
                                low_frequency_cutoff=low_frequency_cutoff,
                                trunc_method='hann')

    kmin = int(low_frequency_cutoff / psd.delta_f)
    psd[:kmin].clear()
    asd = (psd.real())**0.5
    del psd

    white_noise = normal(start_time - FILTER_LENGTH, end_time + FILTER_LENGTH,
                         seed=seed)
    white_noise = white_noise.to_frequencyseries()
    # Here we color. Do not want to duplicate memory here though so use '*='
    white_noise *= asd
    del asd
    colored = white_noise.to_timeseries()
    del white_noise
    return colored.time_slice(start_time, end_time)
Developer: a-r-williamson | Project: pycbc | Lines: 81 | Source: reproduceable.py


Example 16: values

    def values(self, corr_plus, corr_cross, snrv, psd,
               indices, template_plus, template_cross, u_vals,
               hplus_cross_corr, hpnorm, hcnorm):
        """ Calculate the chisq at points given by indices.

        Returns
        -------
        chisq: Array
            Chisq values, one for each sample index

        chisq_dof: Array
            Number of statistical degrees of freedom for the chisq test
            in the given template
        """
        if self.do:
            logging.info("...Doing power chisq")

            num_above = len(indices)
            if self.snr_threshold:
                above = abs(snrv) > self.snr_threshold
                num_above = above.sum()
                logging.info('%s above chisq activation threshold' % num_above)
                above_indices = indices[above]
                above_snrv = snrv[above]
                rchisq = numpy.zeros(len(indices), dtype=numpy.float32)
                dof = -100
            else:
                above_indices = indices
                above_snrv = snrv

            if num_above > 0:
                chisq = []
                curr_tmplt_mult_fac = 0.
                curr_corr_mult_fac = 0.
                if self.template_mem is None or \
                        (not len(self.template_mem) == len(template_plus)):
                    self.template_mem = zeros(len(template_plus),
                                dtype=complex_same_precision_as(corr_plus))
                if self.corr_mem is None or \
                                (not len(self.corr_mem) == len(corr_plus)):
                    self.corr_mem = zeros(len(corr_plus),
                                dtype=complex_same_precision_as(corr_plus))

                tmplt_data = template_cross.data
                corr_data = corr_cross.data
                numpy.copyto(self.template_mem.data, template_cross.data)
                numpy.copyto(self.corr_mem.data, corr_cross.data)
                template_cross._data = self.template_mem.data
                corr_cross._data = self.corr_mem.data

                for lidx, index in enumerate(above_indices):
                    above_local_indices = numpy.array([index])
                    above_local_snr = numpy.array([above_snrv[lidx]])
                    local_u_val = u_vals[lidx]
                    # Construct template from _plus and _cross
                    # Note that this modifies in place, so we store that and
                    # revert on the next pass.
                    template = template_cross.multiply_and_add(template_plus,
                                               local_u_val-curr_tmplt_mult_fac)
                    curr_tmplt_mult_fac = local_u_val

                    template.f_lower = template_plus.f_lower
                    template.params = template_plus.params
                    # Construct the corr vector
                    norm_fac = local_u_val*local_u_val + 1
                    norm_fac += 2 * local_u_val * hplus_cross_corr
                    norm_fac = hcnorm / (norm_fac**0.5)
                    hp_fac = local_u_val * hpnorm / hcnorm
                    corr = corr_cross.multiply_and_add(corr_plus,
                                                   hp_fac - curr_corr_mult_fac)
                    curr_corr_mult_fac = hp_fac

                    bins = self.calculate_chisq_bins(template, psd)
                    dof = (len(bins) - 1) * 2 - 2
                    curr_chisq = power_chisq_at_points_from_precomputed(corr,
                                          above_local_snr/ norm_fac, norm_fac,
                                          bins, above_local_indices)
                    chisq.append(curr_chisq[0])
                chisq = numpy.array(chisq)
                # Must reset corr and template to original values!
                template_cross._data = tmplt_data
                corr_cross._data = corr_data

            if self.snr_threshold:
                if num_above > 0:
                    rchisq[above] = chisq
            else:
                rchisq = chisq

            return rchisq, numpy.repeat(dof, len(indices))# dof * numpy.ones_like(indices)
        else:
            return None, None
Developer: johnveitch | Project: pycbc | Lines: 92 | Source: chisq.py


Example 17: detect_loud_glitches

def detect_loud_glitches(strain, psd_duration=4., psd_stride=2.,
                         psd_avg_method='median', low_freq_cutoff=30.,
                         threshold=50., cluster_window=5., corrupted_time=4.,
                         high_freq_cutoff=None, output_intermediates=False):
    """Automatic identification of loud transients for gating purposes."""

    logging.info('Autogating: tapering strain')
    fade_size = corrupted_time * strain.sample_rate
    w = numpy.arange(fade_size) / float(fade_size)
    strain[0:fade_size] *= Array(w, dtype=strain.dtype)
    strain[(len(strain)-fade_size):] *= Array(w[::-1], dtype=strain.dtype)

    if output_intermediates:
        strain.save_to_wav('strain_conditioned.wav')

    # don't waste time trying to optimize a single FFT
    pycbc.fft.fftw.set_measure_level(0)

    logging.info('Autogating: estimating PSD')
    psd = pycbc.psd.welch(strain, seg_len=int(psd_duration*strain.sample_rate),
                          seg_stride=int(psd_stride*strain.sample_rate),
                          avg_method=psd_avg_method)
    psd = pycbc.psd.interpolate(psd, 1./strain.duration)
    psd = pycbc.psd.inverse_spectrum_truncation(
            psd, int(psd_duration * strain.sample_rate),
            low_frequency_cutoff=low_freq_cutoff,
            trunc_method='hann')

    logging.info('Autogating: time -> frequency')
    strain_tilde = FrequencySeries(numpy.zeros(len(strain) / 2 + 1),
                                   delta_f=psd.delta_f,
                                   dtype=complex_same_precision_as(strain))
    pycbc.fft.fft(strain, strain_tilde)

    logging.info('Autogating: whitening')
    if high_freq_cutoff:
        kmax = int(high_freq_cutoff / strain_tilde.delta_f)
        strain_tilde[kmax:] = 0.
        norm = high_freq_cutoff - low_freq_cutoff
    else:
        norm = strain.sample_rate/2. - low_freq_cutoff
    strain_tilde /= (psd * norm) ** 0.5
    kmin = int(low_freq_cutoff / strain_tilde.delta_f)
    strain_tilde[0:kmin] = 0.

    # FIXME downsample here rather than post-FFT

    logging.info('Autogating: frequency -> time')
    pycbc.fft.ifft(strain_tilde, strain)

    pycbc.fft.fftw.set_measure_level(pycbc.fft.fftw._default_measurelvl)

    if high_freq_cutoff:
        strain = resample_to_delta_t(strain, 0.5 / high_freq_cutoff,
                                     method='ldas')

    logging.info('Autogating: stdev of whitened strain is %.4f', numpy.std(strain))

    if output_intermediates:
        strain.save_to_wav('strain_whitened.wav')

    mag = abs(strain)
    if output_intermediates:
        mag.save('strain_whitened_mag.npy')
    mag = numpy.array(mag, dtype=numpy.float32)

    # remove strain corrupted by filters at the ends
    corrupted_idx = int(corrupted_time * strain.sample_rate)
    mag[0:corrupted_idx] = 0
    mag[-1:-corrupted_idx-1:-1] = 0

    logging.info('Autogating: finding loud peaks')
    indices = numpy.where(mag > threshold)[0]
    cluster_idx = pycbc.events.findchirp_cluster_over_window(
            indices, mag[indices], int(cluster_window*strain.sample_rate))
    times = [idx * strain.delta_t + strain.start_time \
             for idx in indices[cluster_idx]]
    return times
Developer: safeenharis | Project: pycbc | Lines: 78 | Source: strain.py



Note: The pycbc.types.complex_same_precision_as examples in this article were compiled by 纯净天空 from GitHub and other source-code and documentation hosting platforms. The snippets are selected from open-source projects contributed by their authors, and the source code remains the copyright of its original authors; consult each project's license before redistributing or reusing it. Do not reproduce this article without permission.

