
Python util.debug Function Code Examples


This article collects typical usage examples of the Python function multiprocessing.util.debug. If you have been wondering what debug does, how to call it, or what real-world uses look like, the curated code examples below should help.



Twenty code examples of the debug function are shown below, ordered by popularity by default. You can upvote the examples you like or find useful; your feedback helps the site recommend better Python code samples.
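Before any of these examples produce output, note that util.debug is silent unless multiprocessing's logger has been initialized, e.g. via util.log_to_stderr(logging.DEBUG). A minimal, self-contained sketch (the message text is illustrative, not from any of the projects below):

```python
import io
import logging
from multiprocessing import util

# util.debug() only emits a record once the 'multiprocessing' logger exists;
# util.get_logger() creates it, and here we attach our own handler so the
# output can be inspected instead of going to stderr.
logger = util.get_logger()
logger.setLevel(logging.DEBUG)
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

util.debug('cleaning up worker %d', 7)     # same printf-style call as below
print(stream.getvalue().strip())           # -> cleaning up worker 7
```

util.log_to_stderr does the same wiring in one call, but sends the records to stderr with a more verbose format.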

Example 1: _handle_workers

 def _handle_workers(pool):
     while pool._worker_handler._state == RUN and pool._state == RUN:
         pool._maintain_pool()
         time.sleep(0.1)
     # send sentinel to stop workers
     pool._taskqueue.put(None)
     debug('worker handler exiting')
Developer: 1018365842, Project: FreeIMU, Lines of code: 7, Source file: pool.py


Example 2: cancel_join_thread

 def cancel_join_thread(self):
     debug('Queue.cancel_join_thread()')
     self._joincancelled = True
     try:
         self._jointhread.cancel()
     except AttributeError:
         pass
Developer: ChowZenki, Project: kbengine, Lines of code: 7, Source file: queues.py


Example 3: _join_exited_workers

    def _join_exited_workers(self, lost_worker_timeout=10.0):
        """Cleanup after any worker processes which have exited due to
        reaching their specified lifetime. Returns True if any workers were
        cleaned up.
        """
        now = None
        # The worker may have published a result before being terminated,
        # but we have no way to accurately tell if it did.  So we wait for
        # 10 seconds before we mark the job with WorkerLostError.
        for job in [job for job in self._cache.values()
                if not job.ready() and job._worker_lost]:
            now = now or time.time()
            if now - job._worker_lost > lost_worker_timeout:
                err = WorkerLostError("Worker exited prematurely.")
                job._set(None, (False, err))

        cleaned = []
        for i in reversed(range(len(self._pool))):
            worker = self._pool[i]
            if worker.exitcode is not None:
                # worker exited
                debug('cleaning up worker %d' % i)
                worker.join()
                cleaned.append(worker.pid)
                del self._pool[i]
        if cleaned:
            for job in self._cache.values():
                for worker_pid in job.worker_pids():
                    if worker_pid in cleaned and not job.ready():
                        if self._putlock is not None:
                            self._putlock.release()
                        job._worker_lost = time.time()
                        continue
            return True
        return False
Developer: pcardune, Project: celery, Lines of code: 35, Source file: pool.py


Example 4: _join_exited_workers

 def _join_exited_workers(self):
     """Cleanup after any worker processes which have exited due to reaching
     their specified lifetime.  Returns True if any workers were cleaned up.
     """
     cleaned = False
     for i in reversed(range(len(self._pool))):
         worker = self._pool[i]
         if worker.exitcode is not None:
             # worker exited
             try:
                 worker.join()
             except RuntimeError:
                 #
                 # RuntimeError: cannot join thread before it is started
                 #
                 # This is a race condition in DaemonProcess.start() which was found
                 # during some of the test scans I run. The race condition exists
                 # because we're using Threads for a Pool that was designed to be
                 # used with real processes: thus there is no worker.exitcode,
                 # thus it has to be simulated in a race condition-prone way.
                 #
                 continue
             else:
                 debug('cleaning up worker %d' % i)
             cleaned = True
             del self._pool[i]
     return cleaned
Developer: andresriancho, Project: w3af, Lines of code: 27, Source file: threadpool.py
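The RuntimeError branch in Example 4 guards a race where a pooled "process" (really a thread in that pool) has not been started yet. The failure mode itself is easy to reproduce in isolation (a hedged sketch, not w3af code):

```python
import threading

# Joining a Thread that was never started raises RuntimeError,
# which is exactly what the except clause above swallows.
t = threading.Thread(target=lambda: None)
try:
    t.join()
    outcome = 'joined'
except RuntimeError as e:
    outcome = 'RuntimeError: %s' % e
print(outcome)
```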


Example 5: _feed

    def _feed(buffer, notempty, send, writelock, close, ignore_epipe):
        debug('starting thread to feed data to pipe')
        from .util import is_exiting

        nacquire = notempty.acquire
        nrelease = notempty.release
        nwait = notempty.wait
        bpopleft = buffer.popleft
        sentinel = _sentinel
        if sys.platform != 'win32':
            wacquire = writelock.acquire
            wrelease = writelock.release
        else:
            wacquire = None

        try:
            while 1:
                nacquire()
                try:
                    if not buffer:
                        nwait()
                finally:
                    nrelease()
                try:
                    while 1:
                        obj = bpopleft()
                        if obj is sentinel:
                            debug('feeder thread got sentinel -- exiting')
                            close()
                            return

                        if wacquire is None:
                            send(obj)
                            # Delete references to object. See issue16284
                            del obj
                        else:
                            wacquire()
                            try:
                                send(obj)
                                # Delete references to object. See issue16284
                                del obj
                            finally:
                                wrelease()
                except IndexError:
                    pass
        except Exception as e:
            if ignore_epipe and getattr(e, 'errno', 0) == errno.EPIPE:
                return
            # Since this runs in a daemon thread the resources it uses
            # may become unusable while the process is cleaning up.
            # We ignore errors which happen after the process has
            # started to cleanup.
            try:
                if is_exiting():
                    info('error in queue thread: %s', e)
                else:
                    import traceback
                    traceback.print_exc()
            except Exception:
                pass
Developer: Anzumana, Project: cpython, Lines of code: 60, Source file: queues.py
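The feeder thread above combines a Condition, a deque, and a unique sentinel object. A stripped-down sketch of that pattern (all names here are illustrative; the sent list stands in for the pipe's send):

```python
import threading
from collections import deque

# A daemon thread drains a deque guarded by a Condition; a unique
# sentinel object tells it to stop, mirroring _feed() above.
_sentinel = object()
buffer = deque()
notempty = threading.Condition()
sent = []                          # stands in for the pipe's send()

def feed():
    while True:
        with notempty:
            while not buffer:
                notempty.wait()    # sleep until the producer notifies
        try:
            while True:
                obj = buffer.popleft()
                if obj is _sentinel:
                    return         # feeder thread got sentinel -- exiting
                sent.append(obj)
        except IndexError:
            pass                   # buffer drained; go back to waiting

t = threading.Thread(target=feed, daemon=True)
t.start()
with notempty:
    buffer.extend(['a', 'b', _sentinel])
    notempty.notify()
t.join(timeout=5)
print(sent)   # -> ['a', 'b']
```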


Example 6: worker

def worker(inqueue, outqueue, initializer=None, initargs=()):
    put = outqueue.put
    get = inqueue.get
    if hasattr(inqueue, '_writer'):
        inqueue._writer.close()
        outqueue._reader.close()

    if initializer is not None:
        initializer(*initargs)

    while 1:
        try:
            task = get()
        except (EOFError, IOError):
            debug('worker got EOFError or IOError -- exiting')
            break

        if task is None:
            debug('worker got sentinel -- exiting')
            break

        job, i, func, args, kwds = task
        try:
            result = (True, func(*args, **kwds))
        except Exception as e:
            result = (False, e)
        put((job, i, result))
Developer: Kanma, Project: Athena-Dependencies-Python, Lines of code: 27, Source file: pool.py
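The task tuple protocol this worker loop consumes, (job, i, func, args, kwds) with None as the shutdown sentinel, can be exercised in-process with plain queue.Queue objects (a sketch under that assumption, not the multiprocessing queue classes):

```python
import operator
import queue

inqueue, outqueue = queue.Queue(), queue.Queue()

def worker(inqueue, outqueue):
    while True:
        task = inqueue.get()
        if task is None:                 # worker got sentinel -- exiting
            break
        job, i, func, args, kwds = task
        try:
            result = (True, func(*args, **kwds))
        except Exception as e:
            result = (False, e)          # failures travel back as values
        outqueue.put((job, i, result))

inqueue.put((0, 0, operator.add, (2, 3), {}))
inqueue.put(None)                        # sentinel: stop after one task
worker(inqueue, outqueue)
first = outqueue.get()
print(first)   # -> (0, 0, (True, 5))
```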


Example 7: _join_exited_workers

 def _join_exited_workers(self):
     """Cleanup after any worker processes which have exited due to
     reaching their specified lifetime. Returns True if any workers were
     cleaned up.
     """
     cleaned = []
     for i in reversed(range(len(self._pool))):
         worker = self._pool[i]
         if worker.exitcode is not None:
             # worker exited
             debug('cleaning up worker %d' % i)
             worker.join()
             cleaned.append(worker.pid)
             del self._pool[i]
     if cleaned:
         for job in self._cache.values():
             for worker_pid in job.worker_pids():
                 if worker_pid in cleaned and not job.ready():
                     if self._putlock is not None:
                         try:
                             self._putlock.release()
                         except Exception:
                             pass
                     err = WorkerLostError("Worker exited prematurely.")
                     job._set(None, (False, err))
                     continue
         return True
     return False
Developer: aleszoulek, Project: celery, Lines of code: 28, Source file: pool.py


Example 8: close

 def close(self):
     debug('closing pool')
     if self._state == RUN:
         self._state = CLOSE
         self._worker_handler.close()
         self._worker_handler.join()
         self._taskqueue.put(None)
Developer: aleszoulek, Project: celery, Lines of code: 7, Source file: pool.py


Example 9: join

 def join(self):
     debug('joining pool')
     assert self._state in (CLOSE, TERMINATE)
     self._task_handler.join()
     self._result_handler.join()
     for p in self._pool:
         p.join()
Developer: Kanma, Project: Athena-Dependencies-Python, Lines of code: 7, Source file: pool.py


Example 10: __init__

    def __init__(self, kind, value, maxvalue):
        # unlink_now is only used on win32 or when we are using fork.
        unlink_now = False
        for i in range(100):
            try:
                self._semlock = _SemLock(
                    kind, value, maxvalue, SemLock._make_name(),
                    unlink_now)
            except FileExistsError:  # pragma: no cover
                pass
            else:
                break
        else:  # pragma: no cover
            raise FileExistsError('cannot find name for semaphore')

        util.debug('created semlock with handle %s and name "%s"'
                   % (self._semlock.handle, self._semlock.name))

        self._make_methods()

        def _after_fork(obj):
            obj._semlock._after_fork()

        util.register_after_fork(self, _after_fork)

        # When the object is garbage collected or the
        # process shuts down we unlink the semaphore name
        semaphore_tracker.register(self._semlock.name)
        util.Finalize(self, SemLock._cleanup, (self._semlock.name,),
                      exitpriority=0)
Developer: ELVIS-Project, Project: music21, Lines of code: 30, Source file: synchronize.py


Example 11: SocketClient

def SocketClient(address):
    '''
    Return a connection object connected to the socket given by `address`
    '''
    family = address_type(address)
    with socket.socket( getattr(socket, family) ) as s:
        s.setblocking(True)
        t = _init_timeout()

        while 1:
            try:
                s.connect(address)
            except socket.error as e:
                if e.args[0] != errno.ECONNREFUSED or _check_timeout(t):
                    debug('failed to connect to address %s', address)
                    raise
                time.sleep(0.01)
            else:
                break

        fd = duplicate(s.fileno())
    conn = _multiprocessing.Connection(fd)
    return conn
Developer: 7modelsan, Project: kbengine, Lines of code: 25, Source file: connection.py
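The retry loop in SocketClient retries only on ECONNREFUSED and gives up on any other error or when the timeout budget runs out. It can be sketched as a standalone helper (the 2-second budget and the helper name are assumptions, not from kbengine or CPython):

```python
import errno
import socket
import time

def connect_with_retry(address, timeout=2.0):
    deadline = time.monotonic() + timeout
    while True:
        s = socket.socket(socket.AF_INET)
        try:
            s.connect(address)
            return s
        except OSError as e:
            s.close()
            if e.errno != errno.ECONNREFUSED or time.monotonic() > deadline:
                raise                    # failed to connect to address
            time.sleep(0.01)             # brief backoff, then retry

# usage against a listener we control
server = socket.socket(socket.AF_INET)
server.bind(('127.0.0.1', 0))
server.listen(1)
addr = server.getsockname()

conn = connect_with_retry(addr)
peer = conn.getpeername()
conn.close()
server.close()
print(peer == addr)   # -> True
```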


Example 12: worker

def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
    assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
    put = outqueue.put
    get = inqueue.get
    if hasattr(inqueue, '_writer'):
        inqueue._writer.close()
        outqueue._reader.close()

    if initializer is not None:
        initializer(*initargs)

    completed = 0
    while maxtasks is None or (maxtasks and completed < maxtasks):
        try:
            task = get()
        except (EOFError, IOError):
            debug('worker got EOFError or IOError -- exiting')
            break

        if task is None:
            debug('worker got sentinel -- exiting')
            break

        job, i, func, args, kwds = task
        try:
            result = (True, func(*args, **kwds))
        except Exception as e:
            result = (False, e)

        try:
            put((job, i, result))
        except Exception as e:
            wrapped = create_detailed_pickling_error(e, result[1])
            put((job, i, (False, wrapped)))
        completed += 1
Developer: BioSoundSystems, Project: w3af, Lines of code: 35, Source file: pool276.py


Example 13: create

    def create(self, c, typeid, *args, **kwds):
        """
        Create a new shared object and return its id
        """
        self.mutex.acquire()
        try:
            callable, exposed, method_to_typeid, proxytype = self.registry[typeid]

            if callable is None:
                # no factory registered: the caller passes the object itself
                assert len(args) == 1 and not kwds
                obj = args[0]
            else:
                obj = callable(*args, **kwds)

            if exposed is None:
                exposed = public_methods(obj)
            if method_to_typeid is not None:
                assert type(method_to_typeid) is dict
                exposed = list(exposed) + list(method_to_typeid)

            ident = '%x' % id(obj)
            util.debug('%r callable returned object with id %r', typeid, ident)

            self.id_to_obj[ident] = (obj, set(exposed), method_to_typeid)
            if ident not in self.id_to_refcount:
                self.id_to_refcount[ident] = 0
            self.incref(c, ident)
            return (ident, tuple(exposed))
        finally:
            self.mutex.release()
Developer: webiumsk, Project: WOT-0.9.15.1, Lines of code: 28, Source file: managers.py


Example 14: _help_stuff_finish

 def _help_stuff_finish(inqueue, task_handler, size):
     # task_handler may be blocked trying to put items on inqueue
     debug('removing tasks from inqueue until task handler finished')
     inqueue._rlock.acquire()
     while task_handler.is_alive() and inqueue._reader.poll():
         inqueue._reader.recv()
         time.sleep(0)
Developer: carriercomm, Project: w3af_analyse, Lines of code: 7, Source file: pool276.py


Example 15: worker

def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
    pid = os.getpid()
    assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
    put = outqueue.put
    get = inqueue.get

    if hasattr(inqueue, '_reader'):

        def poll(timeout):
            if inqueue._reader.poll(timeout):
                return True, get()
            return False, None
    else:

        def poll(timeout):  # noqa
            try:
                return True, get(timeout=timeout)
            except Queue.Empty:
                return False, None

    if hasattr(inqueue, '_writer'):
        inqueue._writer.close()
        outqueue._reader.close()

    if initializer is not None:
        initializer(*initargs)

    if SIG_SOFT_TIMEOUT is not None:
        signal.signal(SIG_SOFT_TIMEOUT, soft_timeout_sighandler)

    completed = 0
    while maxtasks is None or (maxtasks and completed < maxtasks):
        try:
            ready, task = poll(1.0)
            if not ready:
                continue
        except (EOFError, IOError):
            debug('worker got EOFError or IOError -- exiting')
            break

        if task is None:
            debug('worker got sentinel -- exiting')
            break

        job, i, func, args, kwds = task
        put((ACK, (job, i, time.time(), pid)))
        try:
            result = (True, func(*args, **kwds))
        except Exception:
            result = (False, ExceptionInfo(sys.exc_info()))
        try:
            put((READY, (job, i, result)))
        except Exception as exc:
            _, _, tb = sys.exc_info()
            wrapped = MaybeEncodingError(exc, result[1])
            einfo = ExceptionInfo((MaybeEncodingError, wrapped, tb))
            put((READY, (job, i, (False, einfo))))

        completed += 1
Developer: harmv, Project: celery, Lines of code: 59, Source file: pool.py


Example 16: join

 def join(self):
     assert self._state in (CLOSE, TERMINATE)
     self._worker_handler.join()
     self._task_handler.join()
     self._result_handler.join()
     for p in self._pool:
         p.join()
     debug('after join()')
Developer: HonzaKral, Project: celery, Lines of code: 8, Source file: pool.py


Example 17: _repopulate_pool

 def _repopulate_pool(self):
     """Bring the number of pool processes up to the specified number,
     for use after reaping workers which have exited.
     """
     debug('repopulating pool')
     for i in range(self._processes - len(self._pool)):
         self._create_worker_process()
         debug('added worker')
Developer: HonzaKral, Project: celery, Lines of code: 8, Source file: pool.py


Example 18: _finalize_close

 def _finalize_close(buffer, notempty):
     debug('telling queue thread to quit')
     notempty.acquire()
     try:
         buffer.append(_sentinel)
         notempty.notify()
     finally:
         notempty.release()
Developer: ChowZenki, Project: kbengine, Lines of code: 8, Source file: queues.py


Example 19: join

 def join(self):
     debug('joining pool')
     assert self._state in (CLOSE, TERMINATE)
     self._worker_handler.join()
     self._task_handler.join()
     self._result_handler.join()
     for p in self._pool:
         p.join()
Developer: webiumsk, Project: WOT-0.9.12, Lines of code: 8, Source file: pool.py


Example 20: _feed

    def _feed(buffer, notempty, send_bytes, writelock, close, reducers,
              ignore_epipe, onerror, queue_sem):
        util.debug('starting thread to feed data to pipe')
        nacquire = notempty.acquire
        nrelease = notempty.release
        nwait = notempty.wait
        bpopleft = buffer.popleft
        sentinel = _sentinel
        if sys.platform != 'win32':
            wacquire = writelock.acquire
            wrelease = writelock.release
        else:
            wacquire = None

        while 1:
            try:
                nacquire()
                try:
                    if not buffer:
                        nwait()
                finally:
                    nrelease()
                try:
                    while 1:
                        obj = bpopleft()
                        if obj is sentinel:
                            util.debug('feeder thread got sentinel -- exiting')
                            close()
                            return

                        # serialize the data before acquiring the lock
                        obj_ = CustomizableLokyPickler.dumps(
                            obj, reducers=reducers)
                        if wacquire is None:
                            send_bytes(obj_)
                        else:
                            wacquire()
                            try:
                                send_bytes(obj_)
                            finally:
                                wrelease()
                        # Remove references early to avoid leaking memory
                        del obj, obj_
                except IndexError:
                    pass
            except BaseException as e:
                if ignore_epipe and getattr(e, 'errno', 0) == errno.EPIPE:
                    return
                # Since this runs in a daemon thread the resources it uses
                # may become unusable while the process is cleaning up.
                # We ignore errors which happen after the process has
                # started to cleanup.
                if util.is_exiting():
                    util.info('error in queue thread: %s', e)
                    return
                else:
                    queue_sem.release()
                    onerror(e, obj)
Developer: MartinThoma, Project: scikit-learn, Lines of code: 58, Source file: queues.py



Note: The multiprocessing.util.debug function examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors; copyright in the code remains with those authors. When redistributing or reusing the code, refer to the corresponding project's license. Please do not reproduce this article without permission.

