Python logcontext.run_in_background Function Code Examples


This article collects typical usage examples of the Python function synapse.util.logcontext.run_in_background. If you are wondering what run_in_background does, how to call it, or what real-world uses look like, the curated examples below should help.



Twenty code examples of run_in_background are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
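
Before the individual examples, here is a minimal, illustrative sketch of the two calling patterns that recur throughout them: starting work in the background without waiting for it, and starting several pieces of work and then gathering the results. It is not taken from any of the examples below; the names handle, do_work, and items are hypothetical placeholders, and the sketch assumes the Twisted and synapse.util.logcontext APIs that the examples themselves use.

    from twisted.internet import defer

    from synapse.util.logcontext import make_deferred_yieldable, run_in_background


    @defer.inlineCallbacks
    def do_work(item):
        # Hypothetical placeholder for some operation that returns a Deferred.
        yield defer.succeed(None)
        defer.returnValue(item)


    @defer.inlineCallbacks
    def handle(items):
        # Pattern 1: fire-and-forget. run_in_background starts the work and
        # returns immediately, leaving the caller's logcontext intact; the
        # background work completes on its own.
        run_in_background(do_work, items[0])

        # Pattern 2: start several pieces of work in the background, then wait
        # for all of them, restoring the calling logcontext once they finish.
        results = yield make_deferred_yieldable(defer.gatherResults(
            [run_in_background(do_work, item) for item in items],
            consumeErrors=True,
        ))
        defer.returnValue(results)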

Example 1: start_purge_history

    def start_purge_history(self, room_id, token,
                            delete_local_events=False):
        """Start off a history purge on a room.

        Args:
            room_id (str): The room to purge from

            token (str): topological token to delete events before
            delete_local_events (bool): True to delete local events as well as
                remote ones

        Returns:
            str: unique ID for this purge transaction.
        """
        if room_id in self._purges_in_progress_by_room:
            raise SynapseError(
                400,
                "History purge already in progress for %s" % (room_id, ),
            )

        purge_id = random_string(16)

        # we log the purge_id here so that it can be tied back to the
        # request id in the log lines.
        logger.info("[purge] starting purge_id %s", purge_id)

        self._purges_by_id[purge_id] = PurgeStatus()
        run_in_background(
            self._purge_history,
            purge_id, room_id, token, delete_local_events,
        )
        return purge_id
Developer: DoubleMalt, Project: synapse, Lines: 32, Source: pagination.py


Example 2: verify_json_objects_for_server

    def verify_json_objects_for_server(self, server_and_json):
        """Bulk verifies signatures of json objects, bulk fetching keys as
        necessary.

        Args:
            server_and_json (list): List of pairs of (server_name, json_object)

        Returns:
            List<Deferred>: for each input pair, a deferred indicating success
                or failure to verify each json object's signature for the given
                server_name. The deferreds run their callbacks in the sentinel
                logcontext.
        """
        # a list of VerifyKeyRequests
        verify_requests = []
        handle = preserve_fn(_handle_key_deferred)

        def process(server_name, json_object):
            """Process an entry in the request list

            Given a (server_name, json_object) pair from the request list,
            adds a key request to verify_requests, and returns a deferred which will
            complete or fail (in the sentinel context) when verification completes.
            """
            key_ids = signature_ids(json_object, server_name)

            if not key_ids:
                return defer.fail(
                    SynapseError(
                        400,
                        "Not signed by %s" % (server_name,),
                        Codes.UNAUTHORIZED,
                    )
                )

            logger.debug("Verifying for %s with key_ids %s",
                         server_name, key_ids)

            # add the key request to the queue, but don't start it off yet.
            verify_request = VerifyKeyRequest(
                server_name, key_ids, json_object, defer.Deferred(),
            )
            verify_requests.append(verify_request)

            # now run _handle_key_deferred, which will wait for the key request
            # to complete and then do the verification.
            #
            # We want _handle_key_request to log to the right context, so we
            # wrap it with preserve_fn (aka run_in_background)
            return handle(verify_request)

        results = [
            process(server_name, json_object)
            for server_name, json_object in server_and_json
        ]

        if verify_requests:
            run_in_background(self._start_key_lookups, verify_requests)

        return results
Developer: matrix-org, Project: synapse, Lines: 60, Source: keyring.py


Example 3: _push_update

    def _push_update(self, member, typing):
        if self.hs.is_mine_id(member.user_id):
            # Only send updates for changes to our own users.
            run_in_background(self._push_remote, member, typing)

        self._push_update_local(
            member=member,
            typing=typing
        )
Developer: rubo77, Project: synapse, Lines: 9, Source: typing.py


Example 4: _start_user_parting

    def _start_user_parting(self):
        """
        Start the process that goes through the table of users
        pending deactivation, if it isn't already running.

        Returns:
            None
        """
        if not self._user_parter_running:
            run_in_background(self._user_parter_loop)
Developer: rubo77, Project: synapse, Lines: 10, Source: deactivate_account.py


Example 5: process_replication_rows

    def process_replication_rows(self, stream_name, token, rows):
        # The federation stream contains things that we want to send out, e.g.
        # presence, typing, etc.
        if stream_name == "federation":
            send_queue.process_rows_for_federation(self.federation_sender, rows)
            run_in_background(self.update_token, token)

        # We also need to poke the federation sender when new events happen
        elif stream_name == "events":
            self.federation_sender.notify_new_events(token)
Developer: rubo77, Project: synapse, Lines: 10, Source: federation_sender.py


Example 6: authenticate_request

    def authenticate_request(self, request, content):
        json_request = {
            "method": request.method.decode('ascii'),
            "uri": request.uri.decode('ascii'),
            "destination": self.server_name,
            "signatures": {},
        }

        if content is not None:
            json_request["content"] = content

        origin = None

        auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")

        if not auth_headers:
            raise NoAuthenticationError(
                401, "Missing Authorization headers", Codes.UNAUTHORIZED,
            )

        for auth in auth_headers:
            if auth.startswith(b"X-Matrix"):
                (origin, key, sig) = _parse_auth_header(auth)
                json_request["origin"] = origin
                json_request["signatures"].setdefault(origin, {})[key] = sig

        if (
            self.federation_domain_whitelist is not None and
            origin not in self.federation_domain_whitelist
        ):
            raise FederationDeniedError(origin)

        if not json_request["signatures"]:
            raise NoAuthenticationError(
                401, "Missing Authorization headers", Codes.UNAUTHORIZED,
            )

        yield self.keyring.verify_json_for_server(origin, json_request)

        logger.info("Request from %s", origin)
        request.authenticated_entity = origin

        # If we get a valid signed request from the other side, it's probably
        # alive
        retry_timings = yield self.store.get_destination_retry_timings(origin)
        if retry_timings and retry_timings["retry_last_ts"]:
            run_in_background(self._reset_retry_timings, origin)

        defer.returnValue(origin)
Developer: matrix-org, Project: synapse, Lines: 49, Source: server.py


Example 7: _renew_attestations

    def _renew_attestations(self):
        """Called periodically to check if we need to update any of our attestations
        """

        now = self.clock.time_msec()

        rows = yield self.store.get_attestations_need_renewals(
            now + UPDATE_ATTESTATION_TIME_MS
        )

        @defer.inlineCallbacks
        def _renew_attestation(group_id, user_id):
            try:
                if not self.is_mine_id(group_id):
                    destination = get_domain_from_id(group_id)
                elif not self.is_mine_id(user_id):
                    destination = get_domain_from_id(user_id)
                else:
                    logger.warn(
                        "Incorrectly trying to do attestations for user: %r in %r",
                        user_id, group_id,
                    )
                    yield self.store.remove_attestation_renewal(group_id, user_id)
                    return

                attestation = self.attestations.create_attestation(group_id, user_id)

                yield self.transport_client.renew_group_attestation(
                    destination, group_id, user_id,
                    content={"attestation": attestation},
                )

                yield self.store.update_attestation_renewal(
                    group_id, user_id, attestation
                )
            except RequestSendFailed as e:
                logger.warning(
                    "Failed to renew attestation of %r in %r: %s",
                    user_id, group_id, e,
                )
            except Exception:
                logger.exception("Error renewing attestation of %r in %r",
                                 user_id, group_id)

        for row in rows:
            group_id = row["group_id"]
            user_id = row["user_id"]

            run_in_background(_renew_attestation, group_id, user_id)
Developer: matrix-org, Project: synapse, Lines: 49, Source: attestations.py


Example 8: get_keys_from_perspectives

    def get_keys_from_perspectives(self, server_name_and_key_ids):
        @defer.inlineCallbacks
        def get_key(perspective_name, perspective_keys):
            try:
                result = yield self.get_server_verify_key_v2_indirect(
                    server_name_and_key_ids, perspective_name, perspective_keys
                )
                defer.returnValue(result)
            except KeyLookupError as e:
                logger.warning(
                    "Key lookup failed from %r: %s", perspective_name, e,
                )
            except Exception as e:
                logger.exception(
                    "Unable to get key from %r: %s %s",
                    perspective_name,
                    type(e).__name__, str(e),
                )

            defer.returnValue({})

        results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
            [
                run_in_background(get_key, p_name, p_keys)
                for p_name, p_keys in self.perspective_servers.items()
            ],
            consumeErrors=True,
        ).addErrback(unwrapFirstError))

        union_of_keys = {}
        for result in results:
            for server_name, keys in result.items():
                union_of_keys.setdefault(server_name, {}).update(keys)

        defer.returnValue(union_of_keys)
Developer: matrix-org, Project: synapse, Lines: 35, Source: keyring.py


Example 9: fire

    def fire(self, *args, **kwargs):
        """Invokes every callable in the observer list, passing in the args and
        kwargs. Exceptions thrown by observers are logged but ignored. It is
        not an error to fire a signal with no observers.

        Returns a Deferred that will complete when all the observers have
        completed."""

        def do(observer):
            def eb(failure):
                logger.warning(
                    "%s signal observer %s failed: %r",
                    self.name, observer, failure,
                    exc_info=(
                        failure.type,
                        failure.value,
                        failure.getTracebackObject()))

            return defer.maybeDeferred(observer, *args, **kwargs).addErrback(eb)

        deferreds = [
            run_in_background(do, o)
            for o in self.observers
        ]

        return make_deferred_yieldable(defer.gatherResults(
            deferreds, consumeErrors=True,
        ))
Developer: DoubleMalt, Project: synapse, Lines: 28, Source: distributor.py


Example 10: fetch_or_execute

    def fetch_or_execute(self, txn_key, fn, *args, **kwargs):
        """Fetches the response for this transaction, or executes the given function
        to produce a response for this transaction.

        Args:
            txn_key (str): A key to ensure idempotency should fetch_or_execute be
            called again at a later point in time.
            fn (function): A function which returns a tuple of
            (response_code, response_dict).
            *args: Arguments to pass to fn.
            **kwargs: Keyword arguments to pass to fn.
        Returns:
            Deferred which resolves to a tuple of (response_code, response_dict).
        """
        if txn_key in self.transactions:
            observable = self.transactions[txn_key][0]
        else:
            # execute the function instead.
            deferred = run_in_background(fn, *args, **kwargs)

            observable = ObservableDeferred(deferred)
            self.transactions[txn_key] = (observable, self.clock.time_msec())

            # if the request fails with an exception, remove it
            # from the transaction map. This is done to ensure that we don't
            # cache transient errors like rate-limiting errors, etc.
            def remove_from_map(err):
                self.transactions.pop(txn_key, None)
                # we deliberately do not propagate the error any further, as we
                # expect the observers to have reported it.

            deferred.addErrback(remove_from_map)

        return make_deferred_yieldable(observable.observe())
Developer: DoubleMalt, Project: synapse, Lines: 34, Source: transactions.py


Example 11: _on_new_room_event

    def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
        """Notify any user streams that are interested in this room event"""
        # poke any interested application service.
        run_in_background(self._notify_app_services, room_stream_id)

        if self.federation_sender:
            self.federation_sender.notify_new_events(room_stream_id)

        if event.type == EventTypes.Member and event.membership == Membership.JOIN:
            self._user_joined_room(event.state_key, event.room_id)

        self.on_new_event(
            "room_key", room_stream_id,
            users=extra_users,
            rooms=[event.room_id],
        )
Developer: rubo77, Project: synapse, Lines: 16, Source: notifier.py


Example 12: send

    def send(self, service, events):
        try:
            txn = yield self.store.create_appservice_txn(
                service=service,
                events=events
            )
            service_is_up = yield self._is_service_up(service)
            if service_is_up:
                sent = yield txn.send(self.as_api)
                if sent:
                    yield txn.complete(self.store)
                else:
                    run_in_background(self._start_recoverer, service)
        except Exception:
            logger.exception("Error creating appservice transaction")
            run_in_background(self._start_recoverer, service)
Developer: rubo77, Project: synapse, Lines: 16, Source: scheduler.py


Example 13: on_new_receipts

    def on_new_receipts(self, min_stream_id, max_stream_id, affected_room_ids):
        yield run_on_reactor()
        try:
            # Need to subtract 1 from the minimum because the lower bound here
            # is not inclusive
            updated_receipts = yield self.store.get_all_updated_receipts(
                min_stream_id - 1, max_stream_id
            )
            # This returns a tuple, user_id is at index 3
            users_affected = set([r[3] for r in updated_receipts])

            deferreds = []

            for u in users_affected:
                if u in self.pushers:
                    for p in self.pushers[u].values():
                        deferreds.append(
                            run_in_background(
                                p.on_new_receipts,
                                min_stream_id, max_stream_id,
                            )
                        )

            yield make_deferred_yieldable(
                defer.gatherResults(deferreds, consumeErrors=True),
            )
        except Exception:
            logger.exception("Exception in pusher on_new_receipts")
Developer: rubo77, Project: synapse, Lines: 28, Source: pusherpool.py


Example 14: _test_run_in_background

    def _test_run_in_background(self, function):
        sentinel_context = LoggingContext.current_context()

        callback_completed = [False]

        def test():
            context_one.request = "one"
            d = function()

            def cb(res):
                self._check_test_key("one")
                callback_completed[0] = True
                return res
            d.addCallback(cb)

            return d

        with LoggingContext() as context_one:
            context_one.request = "one"

            # fire off function, but don't wait on it.
            logcontext.run_in_background(test)

            self._check_test_key("one")

        # now wait for the function under test to have run, and check that
        # the logcontext is left in a sane state.
        d2 = defer.Deferred()

        def check_logcontext():
            if not callback_completed[0]:
                reactor.callLater(0.01, check_logcontext)
                return

            # make sure that the context was reset before it got thrown back
            # into the reactor
            try:
                self.assertIs(LoggingContext.current_context(),
                              sentinel_context)
                d2.callback(None)
            except BaseException:
                d2.errback(twisted.python.failure.Failure())

        reactor.callLater(0.01, check_logcontext)

        # test is done once d2 finishes
        return d2
Developer: rubo77, Project: synapse, Lines: 47, Source: test_logcontext.py


Example 15: verify_json_objects_for_server

    def verify_json_objects_for_server(self, server_and_json):
        """Bulk verifies signatures of json objects, bulk fetching keys as
        necessary.

        Args:
            server_and_json (list): List of pairs of (server_name, json_object)

        Returns:
            List<Deferred>: for each input pair, a deferred indicating success
                or failure to verify each json object's signature for the given
                server_name. The deferreds run their callbacks in the sentinel
                logcontext.
        """
        verify_requests = []

        for server_name, json_object in server_and_json:

            key_ids = signature_ids(json_object, server_name)
            if not key_ids:
                logger.warn("Request from %s: no supported signature keys",
                            server_name)
                deferred = defer.fail(SynapseError(
                    400,
                    "Not signed with a supported algorithm",
                    Codes.UNAUTHORIZED,
                ))
            else:
                deferred = defer.Deferred()

            logger.debug("Verifying for %s with key_ids %s",
                         server_name, key_ids)

            verify_request = VerifyKeyRequest(
                server_name, key_ids, json_object, deferred
            )

            verify_requests.append(verify_request)

        run_in_background(self._start_key_lookups, verify_requests)

        # Pass those keys to handle_key_deferred so that the json object
        # signatures can be verified
        handle = preserve_fn(_handle_key_deferred)
        return [
            handle(rq) for rq in verify_requests
        ]
Developer: DoubleMalt, Project: synapse, Lines: 46, Source: keyring.py


Example 16: _handle_timeouts

    def _handle_timeouts(self):
        """Checks the presence of users that have timed out and updates as
        appropriate.
        """
        logger.info("Handling presence timeouts")
        now = self.clock.time_msec()

        try:
            with Measure(self.clock, "presence_handle_timeouts"):
                # Fetch the list of users that *may* have timed out. Things may have
                # changed since the timeout was set, so we won't necessarily have to
                # take any action.
                users_to_check = set(self.wheel_timer.fetch(now))

                # Check whether the lists of syncing processes from an external
                # process have expired.
                expired_process_ids = [
                    process_id for process_id, last_update
                    in self.external_process_last_updated_ms.items()
                    if now - last_update > EXTERNAL_PROCESS_EXPIRY
                ]
                for process_id in expired_process_ids:
                    users_to_check.update(
                        self.external_process_last_updated_ms.pop(process_id, ())
                    )
                    self.external_process_last_update.pop(process_id)

                states = [
                    self.user_to_current_state.get(
                        user_id, UserPresenceState.default(user_id)
                    )
                    for user_id in users_to_check
                ]

                timers_fired_counter.inc(len(states))

                changes = handle_timeouts(
                    states,
                    is_mine_fn=self.is_mine_id,
                    syncing_user_ids=self.get_currently_syncing_users(),
                    now=now,
                )

            run_in_background(self._update_states_and_catch_exception, changes)
        except Exception:
            logger.exception("Exception in _handle_timeouts loop")
Developer: DoubleMalt, Project: synapse, Lines: 46, Source: presence.py


Example 17: claim_one_time_keys

    def claim_one_time_keys(self, query, timeout):
        local_query = []
        remote_queries = {}

        for user_id, device_keys in query.get("one_time_keys", {}).items():
            # we use UserID.from_string to catch invalid user ids
            if self.is_mine(UserID.from_string(user_id)):
                for device_id, algorithm in device_keys.items():
                    local_query.append((user_id, device_id, algorithm))
            else:
                domain = get_domain_from_id(user_id)
                remote_queries.setdefault(domain, {})[user_id] = device_keys

        results = yield self.store.claim_e2e_one_time_keys(local_query)

        json_result = {}
        failures = {}
        for user_id, device_keys in results.items():
            for device_id, keys in device_keys.items():
                for key_id, json_bytes in keys.items():
                    json_result.setdefault(user_id, {})[device_id] = {
                        key_id: json.loads(json_bytes)
                    }

        @defer.inlineCallbacks
        def claim_client_keys(destination):
            device_keys = remote_queries[destination]
            try:
                remote_result = yield self.federation.claim_client_keys(
                    destination,
                    {"one_time_keys": device_keys},
                    timeout=timeout
                )
                for user_id, keys in remote_result["one_time_keys"].items():
                    if user_id in device_keys:
                        json_result[user_id] = keys
            except Exception as e:
                failures[destination] = _exception_to_failure(e)

        yield make_deferred_yieldable(defer.gatherResults([
            run_in_background(claim_client_keys, destination)
            for destination in remote_queries
        ], consumeErrors=True))

        logger.info(
            "Claimed one-time-keys: %s",
            ",".join((
                "%s for %s:%s" % (key_id, user_id, device_id)
                for user_id, user_keys in iteritems(json_result)
                for device_id, device_keys in iteritems(user_keys)
                for key_id, _ in iteritems(device_keys)
            )),
        )

        defer.returnValue({
            "one_time_keys": json_result,
            "failures": failures
        })
Developer: DoubleMalt, Project: synapse, Lines: 58, Source: e2e_keys.py


Example 18: store_file

    def store_file(self, path, file_info):
        if not file_info.server_name and not self.store_local:
            return defer.succeed(None)

        if file_info.server_name and not self.store_remote:
            return defer.succeed(None)

        if self.store_synchronous:
            return self.backend.store_file(path, file_info)
        else:
            # TODO: Handle errors.
            def store():
                try:
                    return self.backend.store_file(path, file_info)
                except Exception:
                    logger.exception("Error storing file")
            run_in_background(store)
            return defer.succeed(None)
Developer: matrix-org, Project: synapse, Lines: 18, Source: storage_provider.py


Example 19: get_server_verify_key_v2_direct

    def get_server_verify_key_v2_direct(self, server_name, key_ids):
        keys = {}

        for requested_key_id in key_ids:
            if requested_key_id in keys:
                continue

            (response, tls_certificate) = yield fetch_server_key(
                server_name, self.hs.tls_server_context_factory,
                path=(b"/_matrix/key/v2/server/%s" % (
                    urllib.quote(requested_key_id),
                )).encode("ascii"),
            )

            if (u"signatures" not in response
                    or server_name not in response[u"signatures"]):
                raise KeyLookupError("Key response not signed by remote server")

            if "tls_fingerprints" not in response:
                raise KeyLookupError("Key response missing TLS fingerprints")

            certificate_bytes = crypto.dump_certificate(
                crypto.FILETYPE_ASN1, tls_certificate
            )
            sha256_fingerprint = hashlib.sha256(certificate_bytes).digest()
            sha256_fingerprint_b64 = encode_base64(sha256_fingerprint)

            response_sha256_fingerprints = set()
            for fingerprint in response[u"tls_fingerprints"]:
                if u"sha256" in fingerprint:
                    response_sha256_fingerprints.add(fingerprint[u"sha256"])

            if sha256_fingerprint_b64 not in response_sha256_fingerprints:
                raise KeyLookupError("TLS certificate not allowed by fingerprints")

            response_keys = yield self.process_v2_response(
                from_server=server_name,
                requested_ids=[requested_key_id],
                response_json=response,
            )

            keys.update(response_keys)

        yield logcontext.make_deferred_yieldable(defer.gatherResults(
            [
                run_in_background(
                    self.store_keys,
                    server_name=key_server_name,
                    from_server=server_name,
                    verify_keys=verify_keys,
                )
                for key_server_name, verify_keys in keys.items()
            ],
            consumeErrors=True
        ).addErrback(unwrapFirstError))

        defer.returnValue(keys)
Developer: rubo77, Project: synapse, Lines: 57, Source: keyring.py


Example 20: _enqueue_events

    def _enqueue_events(self, events, check_redacted=True, allow_rejected=False):
        """Fetches events from the database using the _event_fetch_list. This
        allows batch and bulk fetching of events - it allows us to fetch events
        without having to create a new transaction for each request for events.
        """
        if not events:
            defer.returnValue({})

        events_d = defer.Deferred()
        with self._event_fetch_lock:
            self._event_fetch_list.append(
                (events, events_d)
            )

            self._event_fetch_lock.notify()

            if self._event_fetch_ongoing < EVENT_QUEUE_THREADS:
                self._event_fetch_ongoing += 1
                should_start = True
            else:
                should_start = False

        if should_start:
            run_as_background_process(
                "fetch_events",
                self.runWithConnection,
                self._do_fetch,
            )

        logger.debug("Loading %d events", len(events))
        with PreserveLoggingContext():
            rows = yield events_d
        logger.debug("Loaded %d events (%d rows)", len(events), len(rows))

        if not allow_rejected:
            rows[:] = [r for r in rows if not r["rejects"]]

        res = yield make_deferred_yieldable(defer.gatherResults(
            [
                run_in_background(
                    self._get_event_from_row,
                    row["internal_metadata"], row["json"], row["redacts"],
                    rejected_reason=row["rejects"],
                )
                for row in rows
            ],
            consumeErrors=True
        ))

        defer.returnValue({
            e.event.event_id: e
            for e in res if e
        })
Developer: DoubleMalt, Project: synapse, Lines: 53, Source: events_worker.py



Note: The synapse.util.logcontext.run_in_background examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and distribution and use should follow each project's License. Please do not reproduce this article without permission.

