
Python time.process_time Function Code Examples


This article collects typical usage examples of the time.process_time function in Python. If you have been wondering what exactly process_time does, how to call it, or what real-world uses look like, the hand-picked code samples below should help.



A total of 20 process_time code examples are presented below, drawn from open-source projects and sorted by popularity by default.
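Before the examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of the pattern nearly all of them share: read process_time() before and after a CPU-bound section and take the difference. Because process_time() counts only the CPU time of the current process, time spent sleeping is excluded.

import time

def busy_work(n=1_000_000):
    # CPU-bound loop so that process_time() has something to measure
    total = 0
    for i in range(n):
        total += i * i
    return total

t0 = time.process_time()
busy_work()
time.sleep(0.5)  # sleeping does NOT show up in process_time()
cpu_elapsed = time.process_time() - t0
print('CPU time: %.4f s (the 0.5 s sleep is excluded)' % cpu_elapsed)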

Example 1: transform_source

from time import process_time  # assumed from the original module's imports

def transform_source(input_source, *, options=None, query_options=None):
    """Take in the Python source code to a module and return the
    transformed source code and the symbol table.
    """
    tree = P.Parser.p(input_source)
    
    t1 = process_time()
    tree, symtab = transform_ast(tree, options=options,
                                 query_options=query_options)
    t2 = process_time()
    
    source = P.Parser.ts(tree)
    # All good human beings have trailing newlines in their
    # text files.
    source = source + '\n'
    
    symtab.stats['lines'] = get_loc_source(source)
    # L.tree_size() is for IncASTs, but it should also work for
    # Python ASTs. We have to re-parse the source to get rid of
    # our Comment pseudo-nodes.
    tree = P.Parser.p(source)
    symtab.stats['ast_nodes'] = L.tree_size(tree)
    symtab.stats['time'] = t2 - t1
    
    return source, symtab
Developer: jieaozhu | Project: dist_lang_reviews | Lines: 25 | Source: apply.py


Example 2: main

def main(args):
    running = True

    # Initialize the object store.
    store = objstore.ObjStore()

    # XXX: I don't know if this is a good method for measuring time delay; it
    # may only count process time, not including sleeps.
    curtime = time.process_time()

    while running:
        newtime = time.process_time()
        timediff = newtime - curtime
        curtime = newtime

        changed = list(store.changed())
        if changed:
            # store.changed() returns an iterator, so it is materialized into
            # a list above before taking its length.
            print("Changed: %d" % len(changed))

        if random.random() < 0.000001:
            print("Adding new object")
            obj.RealObj(1, 1, 1, objstore=store)

    return 0
Developer: umbc-hackafe | Project: satellite-game | Lines: 26 | Source: main.py


Example 3: startProcessingPlugin

def startProcessingPlugin(packageName):
    """ initialize only the Processing components of a plugin """
    global plugins, active_plugins, iface, plugin_times
    start = time.process_time()
    if not _startPlugin(packageName):
        return False

    errMsg = QCoreApplication.translate("Python", "Couldn't load plugin '{0}'").format(packageName)
    if not hasattr(plugins[packageName], 'initProcessing'):
        del plugins[packageName]
        _unloadPluginModules(packageName)
        msg = QCoreApplication.translate("Python", "{0} - plugin has no initProcessing() method").format(errMsg)
        showException(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2], msg, messagebar=True)
        return False

    # initProcessing
    try:
        plugins[packageName].initProcessing()
    except:
        del plugins[packageName]
        _unloadPluginModules(packageName)
        msg = QCoreApplication.translate("Python", "{0} due to an error when calling its initProcessing() method").format(errMsg)
        showException(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2], msg, messagebar=True)
        return False

    end = time.process_time()
    _addToActivePlugins(packageName, end - start)

    return True
Developer: m-kuhn | Project: QGIS | Lines: 29 | Source: utils.py


Example 4: wrapper

def wrapper(*args, **kwargs):
    t = time.process_time()
    result = func(*args, **kwargs)
    elapsed_time = time.process_time() - t
    logger.info('function %s executed time: %f s'
                % (func.__name__, elapsed_time))
    return result
Developer: CarefreeLee | Project: FeelUOwn | Lines: 7 | Source: utils.py
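Example 4 shows only the inner wrapper of a timing decorator. For context, a complete decorator built around the same pattern might look like the sketch below; the log_exectime name and the module-level logger are illustrative assumptions, not taken from the FeelUOwn source.

import functools
import logging
import time

logger = logging.getLogger(__name__)  # assumed logger setup

def log_exectime(func):  # hypothetical name for the enclosing decorator
    @functools.wraps(func)  # preserves func.__name__ for the log message
    def wrapper(*args, **kwargs):
        t = time.process_time()
        result = func(*args, **kwargs)
        elapsed_time = time.process_time() - t
        logger.info('function %s executed time: %f s'
                    % (func.__name__, elapsed_time))
        return result
    return wrapper

@log_exectime
def slow_sum(n):
    return sum(i * i for i in range(n))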


Example 5: diameter_homegrown

def diameter_homegrown(graph, weights=None):
    """Compute diameter approximation and time needed to compute it.

    Return a tuple (elapsed_time, diam), where elapsed_time is the time (in
    fractional seconds) needed to compute the approximation to the diameter of
    the graph.

    To compute the approximation, we sample a vertex uniformly at random,
    compute the shortest paths from this vertex to all other vertices, and sum
    the lengths of the two longest paths we found. The returned value is an
    upper bound to the diameter of the graph and is at most 2 times the exact
    value.

    Homegrown version.
    """
    logging.info("Computing diameter approximation with igraph implementation")
    # time.process_time() does not account for sleeping time. Seems the right
    # function to use. Alternative could be time.perf_counter()
    start_time = time.process_time()
    diam = graph.diameter_approximation(weights)
    end_time =  time.process_time()
    elapsed_time = end_time - start_time

    logging.info("Diameter approximation is %d, computed in %f seconds", diam, elapsed_time)
    graph["approx_diam"] = diam
    graph["approx_diam_time"] = elapsed_time
    return (elapsed_time, diam)
Developer: rionda | Project: centrsampl | Lines: 31 | Source: diameter_approx.py
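The comment in Example 5 contrasting time.process_time() with time.perf_counter() is easy to verify in isolation; the following minimal sketch (not part of the centrsampl project) makes the difference visible:

import time

t_cpu = time.process_time()
t_wall = time.perf_counter()
time.sleep(1)
print('process_time: %.3f s' % (time.process_time() - t_cpu))   # ~0.000 s, sleep excluded
print('perf_counter: %.3f s' % (time.perf_counter() - t_wall))  # ~1.000 s, wall-clock time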


Example 6: fit

    def fit(self, train_data, train_labels, val_data, val_labels):
        t_process, t_wall = time.process_time(), time.time()
        sess = tf.Session(graph=self.graph)
        shutil.rmtree(self._get_path('summaries'), ignore_errors=True)
        writer = tf.summary.FileWriter(self._get_path('summaries'), self.graph)
        shutil.rmtree(self._get_path('checkpoints'), ignore_errors=True)
        os.makedirs(self._get_path('checkpoints'))
        path = os.path.join(self._get_path('checkpoints'), 'model')
        sess.run(self.op_init)

        # Training.
        accuracies = []
        losses = []
        indices = collections.deque()
        num_steps = int(self.num_epochs * train_data.shape[0] / self.batch_size)
        for step in range(1, num_steps+1):

            # Be sure to have used all the samples before using one a second time.
            if len(indices) < self.batch_size:
                indices.extend(np.random.permutation(train_data.shape[0]))
            idx = [indices.popleft() for i in range(self.batch_size)]

            batch_data, batch_labels = train_data[idx, :, :, :], train_labels[idx]
            if type(batch_data) is not np.ndarray:
                batch_data = batch_data.toarray()  # convert sparse matrices
            feed_dict = {self.ph_data: batch_data, self.ph_labels: batch_labels, self.ph_dropout: self.dropout}
            learning_rate, loss_average = sess.run([self.op_train, self.op_loss_average], feed_dict)

            # Periodical evaluation of the model.
            if step % self.eval_frequency == 0 or step == num_steps:
                epoch = step * self.batch_size / train_data.shape[0]
                print('step {} / {} (epoch {:.2f} / {}):'.format(step, num_steps, epoch, self.num_epochs))
                print('  learning_rate = {:.2e}, loss_average = {:.2e}'.format(learning_rate, loss_average))

                string, auc, loss, scores_summary = self.evaluate(train_data, train_labels, sess)
                print('  training {}'.format(string))

                string, auc, loss, scores_summary = self.evaluate(val_data, val_labels, sess)
                print('  validation {}'.format(string))
                print('  time: {:.0f}s (wall {:.0f}s)'.format(time.process_time()-t_process, time.time()-t_wall))

                accuracies.append(auc)
                losses.append(loss)

                # Summaries for TensorBoard.
                summary = tf.Summary()
                summary.ParseFromString(sess.run(self.op_summary, feed_dict))
                summary.value.add(tag='validation/auc', simple_value=auc)
                summary.value.add(tag='validation/loss', simple_value=loss)
                writer.add_summary(summary, step)
                
                # Save model parameters (for evaluation).
                self.op_saver.save(sess, path, global_step=step)

        print('validation accuracy: peak = {:.2f}, mean = {:.2f}'.format(max(accuracies), np.mean(accuracies[-10:])))
        writer.close()
        sess.close()
        
        t_step = (time.time() - t_wall) / num_steps
        return accuracies, losses, t_step, scores_summary
Developer: parisots | Project: gcn_metric_learning | Lines: 60 | Source: models_siamese.py


Example 7: plotruntime

def plotruntime(func, reps, x_arr, singleComponent=False):
    x_y_arr = {}
    for it in range(1,reps):
        for x in x_arr:
            if singleComponent:
                graph = createRandConnectedGraph(x, 3*x)
            else:
                graph = createRandomGraph(x, 3*x)
            print('x = ', x)
            print("Nodes: %d, edges: %d" % (x, 3*x))
            timeStamp = time.process_time()  # start time
            func(graph)  # run the function under test
            timeLapse = time.process_time() - timeStamp
            print('timeLapse = ', timeLapse)
            
            if it==1: # Add first element, append rest 
                x_y_arr[x] = [timeLapse]
            else:
                x_y_arr[x].append(timeLapse)
       
    # Average runtimes for each x        
    for k in x_y_arr:
        x_y_arr[k] = np.mean(x_y_arr[k])

    # Plot using matplotlib.pyplot
    plt.xlabel('n')
    plt.ylabel('time (in seconds)')
    plt.title('Run times for different n\'s ')
    plt.plot(list(x_y_arr.keys()), list(x_y_arr.values()), 'ro')
    plt.show()
    return x_y_arr    
Developer: sanjuw | Project: GraphAlgorithms | Lines: 31 | Source: graph_functions.py


Example 8: sudoku_driver

def sudoku_driver(sudoku, expectedSoln=None):
    """
    Driver method that runs the solver, input: unsolved sudoku.
    Optional: expectedSoln, a solution for correctness
    Prints the Original, then the Solution, and Elapsed process_time.
    Raises a ValueError if no solution can be found.
    Note:
        Add a False as an argument for Problem constructor if you
        do not want pruning. e.g Problem(sudoku, False)
    """

    t = time.process_time()

    print("Original Sudoku:\n%s" % printNestedList(sudoku))

    solutionNode = breadth_first_search(Problem(sudoku))

    if solutionNode is None:
        raise(ValueError("No valid soln found."))

    print("Final Solved Sudoku:\n%s" % printNestedList(sudoku))

    print("Solution Branch (upwards from child -> parent): ", end="")
    ptrNode = solutionNode
    while ptrNode.state != 0:
        print(ptrNode.state, " ", end="")
        ptrNode = ptrNode.parent

    print("\nElapsed time for soln: ", time.process_time() - t)
    if expectedSoln is not None:
        assert(sudoku == expectedSoln)
        print("Solution Matches Expected Solution! \n")
Developer: leewc | Project: apollo-academia-umn | Lines: 32 | Source: sudokusolver.py


Example 9: test_burn

def test_burn():
    with stats.record_burn('foo', url='http://example.com/'):
        t0 = time.process_time()
        while time.process_time() < t0 + 0.001:
            pass

    assert stats.burners['foo']['count'] == 1
    assert stats.burners['foo']['time'] > 0 and stats.burners['foo']['time'] < 0.3
    assert 'list' not in stats.burners['foo']  # first burn never goes on the list

    with stats.record_burn('foo', url='http://example.com/'):
        t0 = time.process_time()
        while time.process_time() < t0 + 0.2:
            pass

    assert stats.burners['foo']['count'] == 2
    assert stats.burners['foo']['time'] > 0 and stats.burners['foo']['time'] < 0.3
    assert len(stats.burners['foo']['list']) == 1

    stats.update_cpu_burn('foo', 3, 3.0, set())
    assert stats.burners['foo']['count'] == 5
    assert stats.burners['foo']['time'] > 3.0 and stats.burners['foo']['time'] < 3.3
    assert len(stats.burners['foo']['list']) == 1

    stats.report()
Developer: cocrawler | Project: cocrawler | Lines: 25 | Source: test_stats.py


Example 10: evaluate

    def evaluate(self, data, labels, sess=None):
        """
        Runs one evaluation against the full epoch of data.
        Return the precision and the number of correct predictions.
        Batch evaluation saves memory and enables this to run on smaller GPUs.

        sess: the session in which the model has been trained.
        op: the Tensor that returns the number of correct predictions.
        data: size N x M
            N: number of signals (samples)
            M: number of vertices (features)
        labels: size N
            N: number of signals (samples)
        """
        t_process, t_wall = time.process_time(), time.time()
        predictions, loss = self.predict(data, labels, sess)
        #print(predictions)
        ncorrects = sum(predictions == labels)
        accuracy = 100 * sklearn.metrics.accuracy_score(labels, predictions)
        f1 = 100 * sklearn.metrics.f1_score(labels, predictions, average='weighted')
        string = 'accuracy: {:.2f} ({:d} / {:d}), f1 (weighted): {:.2f}, loss: {:.2e}'.format(
                accuracy, ncorrects, len(labels), f1, loss)
        if sess is None:
            string += '\ntime: {:.0f}s (wall {:.0f}s)'.format(time.process_time()-t_process, time.time()-t_wall)
        return string, accuracy, f1, loss
Developer: hyzcn | Project: cnn_graph | Lines: 25 | Source: models.py


Example 11: get_time_array

def get_time_array():
    list_len = []
    time_used_ratio = []

    for n in a:
        list_len.append(n)
        l = np.array(range(n)) # create an array
        time_sum_p = 0
        time_sum_foo = 0
        print(n)
        for i in range(50):
            random.shuffle(l) # randomize the list
            timeStamp_p = time.process_time() # get the current cpu time
            p(l, 0, len(l)) # run p function
            timeLapse_p = time.process_time() - timeStamp_p
            time_sum_p = time_sum_p + timeLapse_p
            
        for b in range(25):
            random.shuffle(l)
            timeStamp_foo = time.process_time() # get the current cpu time
            foo(l, 0, len(l)) # run foo function
            timeLapse_foo = time.process_time() - timeStamp_foo
            time_sum_foo = time_sum_foo + timeLapse_foo
            
        time_ave_p = time_sum_p / 50
        time_ave_foo = time_sum_foo/25
        time_ave = time_ave_foo / time_ave_p
        time_used_ratio.append(time_ave)
    return [list_len, time_used_ratio]
Developer: xxiang13 | Project: Courses | Lines: 29 | Source: hw1.py


Example 12: get_time_list

def get_time_list():
    list_len = []
    time_used_ratio = []

    for n in a:
        list_len.append(n)
        l = list(range(n)) # create a list with numbers 0 ... to n-1
        time_sum_p = 0
        time_sum_foo = 0
        print(n)
        for i in range(50): # run 50 times to get the average time of function p for each list length
            random.shuffle(l) # randomize the list
            timeStamp_p = time.process_time() # get the current cpu time
            p(l, 0, len(l)) # run p function
            timeLapse_p = time.process_time() - timeStamp_p
            time_sum_p = time_sum_p + timeLapse_p
            
        for b in range(25): # run 25 times to get the average time of function foo for each list length
            random.shuffle(l)
            timeStamp_foo = time.process_time() # get the current cpu time
            foo(l, 0, len(l)) # run foo function
            timeLapse_foo = time.process_time() - timeStamp_foo
            time_sum_foo = time_sum_foo + timeLapse_foo
            
        time_ave_p = time_sum_p / 50
        time_ave_foo = time_sum_foo/25
        time_ave = time_ave_foo / time_ave_p # get the ratio of the average times of foo and p
        time_used_ratio.append(time_ave)
    return [list_len, time_used_ratio]
Developer: xxiang13 | Project: Courses | Lines: 29 | Source: hw1.py


Example 13: print_progress

def print_progress(iteration, total, start, prefix = '', suffix = '', decimals = 2, barLength = 100):
    """Call in a loop to create terminal progress bar
    @params:
        iteration   - Required  : current iteration (Int)
        total       - Required  : total iterations (Int)
        prefix      - Optional  : prefix string (Str)
        suffix      - Optional  : suffix string (Str)
    """
    filledLength    = int(round(barLength * iteration / float(total)))
    percents        = round(100.00 * (iteration / float(total)), decimals)
    bar             = '#' * filledLength + '-' * (barLength - filledLength)
    global metrics
    global START_TIME
    global speed
    if (time.process_time() - START_TIME) * 1000  > 5:
        START_TIME = time.process_time()
        speed           = round((iteration*8//(time.process_time() - start)//1024), decimals)
        metrics         = 'Kbps'
        if speed > 1024:
            speed = speed//1024
            metrics = 'Mbps'

    sys.stdout.write('%s [%s] %s%s %s%s %s\r' % (prefix, bar, percents, '%', suffix, speed, metrics))
    sys.stdout.flush()
    if iteration == total:
        print("\n")
Developer: 35359595 | Project: pyfs | Lines: 26 | Source: pyfs.py
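The docstring says print_progress is meant to be called in a loop. A hypothetical driver is sketched below; the global initializations and the CPU-bound stand-in work are assumptions, since the module-level code surrounding the original function is not shown.

import sys   # print_progress writes to sys.stdout
import time

START_TIME = time.process_time()  # globals assumed by print_progress
speed = 0
metrics = 'Kbps'

start = time.process_time()
total = 100
for i in range(1, total + 1):
    sum(j * j for j in range(20000))  # stand-in for a chunk of real work
    print_progress(i, total, start, prefix='Progress:', suffix='Complete', barLength=50)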


Example 14: main

from time import process_time  # assumed from the original module's imports

def main():
    parser = argparse.ArgumentParser()

    parser.add_argument("dataset", help="Path to graph dataset (.gml format)")

    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("-k", "--kgroups", type=int, help="Number of groups to generate. Random Sources")
    group.add_argument(
        "-s", "--sources", help="Shortest Path sources. Comma separeted. Ex: Brighton,Edinburgh", default=""
    )

    parser.add_argument("-v", "--verbose", action="store_true", help="Show all vertices value")
    parser.add_argument("-t", "--timeit", action="store_true", help="Print execution time of chosen method")
    parser.add_argument("-p", "--plot", action="store_true", help="Plot the graphs generated")

    args = parser.parse_args()
    args.sources = args.sources.split(",")

    graph = ssp_classification.SSPClassification(args.dataset)

    t = process_time()
    grouped_graph = graph.extract_ssp(args.sources, args.kgroups)
    elapsed_time = process_time() - t

    if args.timeit:
        print("Time: %.5f seconds" % elapsed_time)

    print("Groups formed:")
    for x in nx.connected_components(grouped_graph):
        print(x)

    if args.plot:
        ssp_classification.plot(graph.graph, 1, "Graph")
        ssp_classification.plot(grouped_graph, 2, "Grouped Graph")
        plt.show()
Developer: falcaopetri | Project: GraphTheoryAtUFSCar | Lines: 35 | Source: main.py


Example 15: align

    def align(self, parameters=None, anchor_pairs=None):

        # sanity checks
        if self.marker.code != 'ladder':
            raise RuntimeError('E: align() must be performed on ladder channel!')

        if parameters:
            self.scan( parameters )         # in case this channel hasn't been scanned

        ladder = self.fsa.panel.get_ladder()

        # prepare ladder qcfunc
        if 'qcfunc' not in ladder:
            ladder['qcfunc'] =  algo.generate_scoring_function(
                                            ladder['strict'], ladder['relax'] )

        start_time = time.process_time()
        result = algo.align_peaks(self, parameters, ladder, anchor_pairs)
        dpresult = result.dpresult
        fsa = self.fsa
        fsa.z = dpresult.z
        fsa.rss = dpresult.rss
        fsa.nladder = len(dpresult.sized_peaks)
        fsa.score = result.score
        fsa.duration = time.process_time() - start_time
        fsa.status = const.assaystatus.aligned
        fsa.ztranspose = dpresult.ztranspose

        #import pprint; pprint.pprint(dpresult.sized_peaks)
        #print(fsa.z)
        cout('O: Score %3.2f | %5.2f | %d/%d | %s | %5.1f | %s' %
            (fsa.score, fsa.rss, fsa.nladder, len(ladder['sizes']), result.method,
            fsa.duration, fsa.filename) )
Developer: edawine | Project: fatools | Lines: 33 | Source: mixin2.py


Example 16: inner

def inner(self, *args, **kwargs):
    start_time = time.process_time()
    result = fun(self, *args, **kwargs)
    elapsed_sec = round(time.process_time() - start_time, 2)
    msg = self.function.__name__ if hasattr(self, 'function') else self.__class__.__name__
    click.secho('Finished {} in {} sec'.format(msg, elapsed_sec), fg='yellow')
    return result
Developer: mgawino | Project: coal-mines-ml | Lines: 7 | Source: utils.py


Example 17: log_avg_performance_naive

    def log_avg_performance_naive(self, dataarray):
        lag = int(Variable_Holder.max_taumeta / self.taumeta)
        etimenaive = np.zeros(self.num_estimations + 1, dtype=float)

        for k in range(0, self.num_estimations + 1):
            current_time = self.window_size + k * self.shift - 1
            assert (current_time < np.shape(dataarray)[1])
            t0 = process_time()
            data0 = dataarray[:, current_time - self.window_size + 1: (current_time + 1)]
            dataslice0 = []

            for i in range(0, self.num_trajectories):
                dataslice0.append(data0[i, :])
            if k == 0:
                # initialization - we found out that calling the count_matrix_coo2_mult function the first time results
                # in lower performance than for following calls - probably due to caching in the background. To avoid
                # this deviation, we call this function once - for starting the cache procedure.
                estimate_via_sliding_windows(data=dataslice0, num_states=Variable_Holder.num_states, initial=True,
                                             lag=lag)

            C0 = estimate_via_sliding_windows(data=dataslice0, num_states=Variable_Holder.num_states,
                                              initial=True)  # count matrix for whole window
            A0 = _tm(C0)
            etimenaive[k] = process_time() - t0
        log_total_time_naive = Utility.log_value(etimenaive[-1])
        return log_total_time_naive
Developer: alexlafleur | Project: LDStreamHMMLearn | Lines: 26 | Source: util_evaluation_mm_naive_only.py


Example 18: plan

    def plan(self):
        """
        Compute the cost grid based on the map represented in the occupancy grid.

        :return: none
        """

        self.total_plan_steps += 1
        start_time = time.process_time()

        self.compute_shortest_path()

        x = int(self.robot.get_cell_x())
        y = int(self.robot.get_cell_y())

        # When there has been a change to the plan rebuild the path.
        self.path = []

        try:
            self.build_path(x, y)
        except RuntimeError as err:
            if str(err) == "maximum recursion depth exceeded in comparison":
                raise NoPathException("No path to Goal!")

        self.time_taken += round(time.process_time() - start_time, 3)
Developer: swordmaster2k | Project: botnav | Lines: 25 | Source: gridnav_star.py


Example 19: _schedule_processes

    def _schedule_processes(self, tasklist, _worker):
        # Reset the global flag that allows stopping all running subprocesses.
        global _stop_all_processes
        _subprocess_container.stop_all = False
        # Make a shallow copy of the task list,
        # so we don't mess with the callers list.
        tasklist = copy.copy(tasklist)
        number_tasks = len(tasklist)
        if number_tasks == 0:
            totaltime = 0
            return totaltime
        use_threading = number_tasks > 1 and self.num_processes > 1
        starttime = time.process_time()
        task_queue = Queue()
        pbar = _ProgressBar(number_tasks, self.silent)
        pbar.animate(0)
        processed_tasks = []
        n_errors = 0
        threads = []
        try:
            # run while there is still threads, tasks or stuff in the queue
            # to process
            while threads or tasklist or task_queue.qsize():
                # if we aren't using all the processors AND there is still
                # data left to compute, then spawn another thread
                if (len(threads) < self.num_processes) and tasklist:
                    if use_threading:
                        t = Thread(
                            target=_worker, args=tuple([tasklist.pop(0), task_queue])
                        )
                        t.daemon = True
                        t.start()
                        threads.append(t)
                    else:
                        _worker(tasklist.pop(0), task_queue)
                else:
                    # When we already have the maximum number of running
                    # threads or have run out of tasks, check whether any
                    # of them are done.
                    for thread in threads:
                        if not thread.is_alive():
                            threads.remove(thread)
                while task_queue.qsize():
                    task = task_queue.get()
                    if task.has_error():
                        n_errors += 1
                    self.summery.task_summery(task)
                    processed_tasks.append(task)
                    pbar.animate(len(processed_tasks), n_errors)

                time.sleep(0.01)
        except KeyboardInterrupt:
            _display("Processing interrupted")
            _subprocess_container.stop_all = True
            # Add a small delay here. It allows the user to press ctrl-c twice
            # to escape this try-catch. This is useful when the code is run in
            # an outer loop which we want to escape as well.
            time.sleep(1)
        totaltime = time.process_time() - starttime
        return totaltime
Developer: AnyBody-Research-Group | Project: AnyPyTools | Lines: 60 | Source: abcutils.py


Example 20: solve

def solve(impl='python'):
    if impl == 'cython':
        solvercls = csolver.CBruteSolver
    else:
        solvercls = solver.BruteSolver
    try:
        os.mkdir('data/' + impl)
    except FileExistsError:
        pass
    for filename in sorted(glob.glob('data/*.inst.dat')):
        print(filename)
        loaded_data = list(dataloader.load_input(filename))
        count = loaded_data[0]['count']
        correct = list(dataloader.load_provided_results(
            'data/knap_{0:02d}.sol.dat'.format(count)))
        outname = filename.replace('.inst.dat', '.results.jsons')
        outname = outname.replace('data/', 'data/' + impl + '/')
        with open(outname, 'w') as f:
            filestartime = time.process_time()
            for idx, backpack in enumerate(loaded_data):
                startime = time.process_time()
                s = solvercls(backpack)
                backpack['maxcombo'], backpack['maxcost'] = s.solve()
                endtime = time.process_time()
                delta = endtime - startime
                backpack['time'] = delta
                assert backpack['maxcost'] == correct[idx]['maxcost']
                del backpack['items']
                f.write(json.dumps(backpack) + '\n')
            fileendtime = time.process_time()
            delta = fileendtime - filestartime
            f.write('{}\n'.format(delta))
Developer: hroncok | Project: cython-workshop | Lines: 32 | Source: __init__.py



Note: The time.process_time function examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright in the source code remains with the original authors, and any use or redistribution must follow the corresponding project's license. Please do not reproduce without permission.

