content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Mohawk¶
Mohawk is an alternate Python implementation of the Hawk HTTP authorization scheme.. the living Hawk spec). It was redesigned to be more intuitive to developers, less prone to security problems, and more Pythonic.
Installation¶
Requirements:
pip install mohawk
If you want to install from source, visit
Bugs¶
You can submit bugs / patches on Github:
Important
If you think you found a security vulnerability please try emailing [email protected] before submitting a public issue.
Topics¶
- Using Mohawk
- Security Considerations
- API
- Developers
- Why Mohawk?
Framework integration¶
Mohawk is a low level library that focuses on Hawk communication. The following higher-level libraries integrate Mohawk into specific web frameworks:
- Hawkrest: adds Hawk to Django Rest Framework
- Did we miss one? Send a pull request so we can link to it.
TODO¶
- Implement bewit. The bewit URI scheme is not fully implemented at this time.
- Support NTP-like (but secure) synchronization for local server time. See TLSdate.
- Support auto-retrying a
mohawk.Senderrequest with an offset if there is timestamp skew.
Changelog¶
- 0.3.4 (2017-01-07)
- Fixed
AttributeErrorexception (it now raises
mohawk.exc.MissingAuthorization) for cases when the client sends a None type authorization header. See issue 23.
- Fixed Python 3.6 compatibility problem (a regex pattern was using the deprecated
LOCALEflag). See issue 32.
- 0.3.3 (2016-07-12)
- Fixed some cases where
mohawk.exc.MacMismatchwas raised instead of
mohawk.exc.MisComputedContentHash. This follows the Hawk HTTP authorization scheme implementation more closely. See issue 15.
- Published as a Python wheel
- 0.3.2.1 (2016-02-25)
- Re-did the
0.3.2release; the tag was missing some commits. D’oh.
- 0.3.2 (2016-02-24)
- Improved Python 3 support.
- Fixed bug in handling
extvalues that have more than one equal sign.
- Configuration objects no longer need to be strictly dicts.
- 0.3.1 (2016-01-07)
- Initial bewit support (undocumented). Complete support with documentation is still forthcoming.
- 0.3.0 (2015-06-22)
- Breaking change: The
seen_nonce()callback signature has changed. You must update your callback from
seen_nonce(nonce, timestamp)to
seen_nonce(sender_id, nonce, timestamp)to avoid unnecessary collisions. See Using a nonce to prevent replay attacks for details.
- 0.2.2 (2015-01-05)
- Receiver can now respond with a
WWW-Authenticateheader so that senders can adjust their timestamps. Thanks to jcwilson for the patch.
- 0.2.1 (2014-03-03)
- Fixed Python 2 bug in how unicode was converted to bytes when calculating a payload hash.
- 0.2.0 (2014-03-03)
- Added support for Python 3.3 or greater.
- Added support for Python 2.6 (this was just a test suite fix).
- Added six as dependency.
mohawk.Sender.request_headerand
mohawk.Receiver.response_headerare now Unicode objects. They will never contain non-ascii characters though.
- 0.1.0 (2014-02-19)
- Implemented optional content hashing per spec but in a less error prone way
- Added complete documentation
-_secondson
TokenExpiredexception per Hawk spec
- better localtime offset and skew handling
- 0.0.2 (2014-02-06)
- Responding with a custom ext now works
- Protected app and dlg according to spec when accepting responses
- 0.0.1 (2014-02-05)
- initial release of partial implementation | http://mohawk.readthedocs.io/en/latest/ | 2017-02-19T14:11:35 | CC-MAIN-2017-09 | 1487501169776.21 | [] | mohawk.readthedocs.io |
pyramid.request¶
- class
Request(environ, charset=None, unicode_errors=None, decode_param_names=None, **kw)[source]¶
A subclass of the WebOb Request class. An instance of this class is created by the router and is provided to a view callable (and to other subsystems) as the
requestargument.
The documentation below (save for the
add_response_callbackand
add_finished_callbackmethods, which are defined in this subclass itself, and the attributes
context,
registry,
root,
subpath,
traversed,
view_name,
virtual_root, and
virtual_root_path, each of which is added to the request by the Pyramid version. See for further information.
context¶
The context will be available as the
contextattribute of the request object. It will be the context object implied by the current request. See Traversal for information about context objects.
registry¶
The application registry will be available as the
registryattribute of the request object. See Using the Zope Component Architecture in Pyramid for more information about the application registry.
root¶
The root object will be available as the
rootattribute of the request object. It will be the resource object at which traversal started (the root). See Traversal for information about root objects.
subpath¶
The traversal subpath will be available as the
subpathattribute of the request object. It will be a sequence containing zero or more elements (which will be Unicode objects). See Traversal for information about the subpath.
traversed¶
The "traversal path" will be available as the
traversedattribute of the request object. It will be a sequence representing the ordered set of names that were used to traverse to the context, not including the view name or subpath. If there is a virtual root associated with the request, the virtual root path is included within the traversal path. See Traversal for more information.
view_name¶
The view name will be available as the
view_nameattribute of the request object. It will be a single string (possibly the empty string if we're rendering a default view). See Traversal for information about view names.
virtual_root¶
The virtual root will be available as the
virtual_rootattribute of the request object. It will be the virtual root object implied by the current request. See Virtual Hosting for more information about virtual roots.
virtual_root_path¶
The virtual root path will be available as the
virtual_root_pathattribute of the request object. It will be a sequence representing the ordered set of names that were used to traverse to the virtual root object. See Virtual Hosting for more information about virtual roots.
exception¶
If an exception was raised by a root factory or a view callable, or at various other points where Pyramid executes user-defined code during the processing of a request, the exception object which was caught will be available as the
exceptionattribute of the request within a exception view, a response callback or a finished callback. If no exception occurred, the value of
request.exceptionwill be
Nonewithin response and finished callbacks.
exc_info¶
If an exception was raised by a root factory or a view callable, or at various other points where Pyramid executes user-defined code during the processing of a request, result of
sys.exc_info()will be available as the
exc_infoattribute of the request within a exception view, a response callback or a finished callback. If no exception occurred, the value of
request.exc_infowill be
Nonewithin response and finished callbacks.
response[source]¶
This attribute is actually a "reified" property which returns an instance of the
pyramid.response.Responseclass. The response object returned does not exist until this attribute is accessed. Once it is accessed, subsequent accesses to this request object will return the same
Responseobject.
The
request.responseAPI can(...)or
request.response.content_type = 'text/plain', etc) within a view that uses a renderer. For example, within a view that uses a renderer:
response = request.response response.set_cookie('mycookie', 'mine, all mine!') return {'text':'Value that will be used by the renderer'}
Mutations to this response object will be preserved in the response sent to the client after rendering. For more information about using
request.responsein conjunction with a renderer, see Varying Attributes of Rendered Responses.
Non-renderer code can also make use of request.response instead of creating a response "by hand". For example, in view code:
response = request.response response.body = 'Hello!' response.content_type = 'text/plain' return response
Note that the response in this circumstance is not "global"; it still must be returned from the view code if a renderer is not used.
session[source]¶
If a session factory has been configured, this attribute will represent the current user's session object. If a session factory has not been configured, requesting the
request.sessionattribute will cause a
pyramid.exceptions.ConfigurationErrorto be raised.
matchdict¶
If a route has matched during this request, this attribute will be a dictionary containing the values matched by the URL pattern associated with the route. If a route has not matched during this request, the value of this attribute will be
None. See The Matchdict.
matched_route¶
If a route has matched during this request, this attribute will be an object representing the route matched by the URL pattern associated with the route. If a route has not matched during this request, the value of this attribute will be
None. See The Matched Route.
authenticated_userid¶
New in version 1.5.
A property which returns the userid of the currently authenticated user or
Noneif there is no authentication policy in effect or there is no currently authenticated user. This differs from
unauthenticated_userid, because the effective authentication policy will have ensured that a record associated with the userid exists in persistent storage; if it has not, this value will be
None.
unauthenticated_userid¶
New in version 1.5.
A property which returns a value which represents the claimed (not verified) userid of the credentials present in the request.
Noneif there is no authentication policy in effect or there is no user data associated with the current request. This differs from
authenticated_userid, because the effective authentication policy will not ensure that a record associated with the userid exists in persistent storage. Even if the userid does not exist in persistent storage, this value will be the value of the userid claimed by the request data.
effective_principals¶
New in version 1.5.
A property which returns the list of 'effective' principal identifiers for this request. This list typically includes the userid of the currently authenticated user if a user is currently authenticated, but this depends on the authentication policy in effect. If no authentication policy is in effect, this will return a sequence containing only the
pyramid.security.Everyoneprincipal.
invoke_subrequest(request, use_tweens=False)¶
New in version 1.4a1.
Obtain a response object from the Pyramid application based on information in the
requestobject provided. The
requestobject must be an object that implements the Pyramid request interface (such as a
pyramid.request.Requestinstance). If
use_tweensis
True, the request will be sent to the tween in the tween stack closest to the request ingress. If
use_tweensis
False, the request will be sent to the main router handler, and no tweens will be invoked.
This function also:
- manages the threadlocal stack (so that
get_current_request()and
get_current_registry()work during a request)
- Adds a
registryattribute (the current Pyramid registry) and a
invoke_subrequestattribute (a callable) to the request object it's handed.
- sets request extensions (such as those added via
add_request_method()or
set_request_property()) on the request it's passed.
- causes a
NewRequestevent to be sent at the beginning of request processing.
- causes a
ContextFoundevent to be sent when a context resource is found.
- Ensures that the user implied by the request passed has the necessary authorization to invoke view callable before calling it.
- Calls any response callback functions defined within the request's lifetime if a response is obtained from the Pyramid application.
- causes a
NewResponseevent to be sent if a response is obtained.
- Calls any finished callback functions defined within the request's lifetime.
invoke_subrequestisn't actually a method of the Request object; it's a callable added when the Pyramid router is invoked, or when a subrequest is invoked. This means that it's not available for use on a request provided by e.g. the
pshellenvironment.
See also
See also Invoking a Subrequest.
invoke_exception_view(exc_info=None, request=None, secure=True)¶
Executes an exception view related to the request it's called upon. The arguments it takes are these:
exc_infoIf provided, should be a 3-tuple in the form provided by
sys.exc_info(). If not provided,
sys.exc_info()will be called to obtain the current interpreter exception information. Default:
None.
requestIf the request to be used is not the same one as the instance that this method is called upon, it may be passed here. Default:
None.
secureIf the exception view should not be rendered if the current user does not have the appropriate permission, this should be
True. Default:
True.
If called with no arguments, it uses the global exception information returned by
sys.exc_info()as
exc_info, the request object that this method is attached to as the
request, and
Truefor
secure.
This method returns a response object or raises
pyramid.httpexceptions.HTTPNotFoundif a matching view cannot be found.
has_permission(permission, context=None)¶
Given a permission and an optional context, returns an instance of
pyramid.security.Allowedif the permission is granted to this request with the provided context, or the context already associated with the request. Otherwise, returns an instance of
pyramid.security.Denied. This method delegates to the current authentication and authorization policies. Returns
pyramid.security.Allowedunconditionally if no authentication policy has been registered for this request. If
contextis not supplied or is supplied as
None, the context used is the
request.contextattribute.
New in version 1.5.
add_response_callback(callback)¶
Add a callback to the set of callbacks to be called by the router at a point after a response object is successfully created. Pyramid does not have a global response object: this functionality allows an application to register an action to be performed against the response once one is created.
A 'callback' is a callable which accepts two positional parameters:
requestand
response. For example:
Response callbacks are called in the order they're added (first-to-most-recently-added). No response callback is called if an exception happens in application code, or if the response object returned by view code is invalid.
All response callbacks are called after the tweens and before the
pyramid.events.NewResponseevent is sent.
Errors raised by callbacks are not handled specially. They will be propagated to the caller of the Pyramid router application.
See also
See also Using Response Callbacks.
add_finished_callback(callback)¶
Add a callback to the set of callbacks to be called unconditionally by the router at the very end of request processing.
callbackis a callable which accepts a single positional parameter:
request. For example: router. They are called after response processing has already occurred in a top-level
finally:block within the router request processing code. As a result, mutations performed to the
requestprovided.
See also
See also Using Finished Callbacks.
route_url(route_name, *elements, **kw)¶
Generates a fully qualified URL for a named Pyramid route configuration.
Use the route's
nameas the first positional argument. Additional positional arguments (
*elements) are appended to the URL as path segments after it is generated.
Use keyword arguments to supply values which match any dynamic path elements in the route definition. Raises a
KeyErrorexception if the URL cannot be generated for any reason (not enough arguments, for example).
For example, if you've defined a route named "foobar" with the path
{foo}/{bar}/*traverse:
request.route_url('foobar', foo='1') => <KeyError exception> request.route_url('foobar', foo='1', bar='2') => <KeyError exception> request.route_url('foobar', foo='1', bar='2', traverse=('a','b')) => request.route_url('foobar', foo='1', bar='2', traverse='/a/b') =>
Values replacing
:segmentarguments can be passed as strings or Unicode objects. They will be encoded to UTF-8 and URL-quoted before being placed into the generated URL.
Values replacing
*remainderarguments can be passed as strings or tuples of Unicode/string values. If a tuple is passed as a
*remainderreplacement value, its values are URL-quoted and encoded to UTF-8. The resulting strings are joined with slashes and rendered into the URL. If a string is passed as a
*remainderreplacement value, it is tacked on to the URL after being URL-quoted-except-for-embedded-slashes.
If no
_querykeyword argument is provided, the request query string will be returned in the URL. If it is present, it will be used to compose a query string that will be tacked on to the end of the URL, replacing any request query string.format. quoted per RFC 3986#section-3.5 and used as a named anchor in the generated URL (e.g. if
_anchoris passed as
fooand the route.
Note that if
_schemeis passed as
https, and
_app_urlis present, it. If
_app_urlis not specified, the result of
request.application_urlwill be used as the prefix (the default).
If both
_app_urland any of
_scheme,
_host, or
_portare passed,
_app_urltakes precedence and any values passed for
_scheme,
_host, and
_portwill be ignored.
This function raises a
KeyErrorif the URL cannot be generated due to missing replacement names. Extra replacement names are ignored.
If the route object which matches the
route_nameargument has a pregenerator, the
*elementsand
**kwarguments passed to this function might be augmented or changed.
route_path(route_name, *elements, **kw)¶
Generates a path (aka a 'relative URL', a URL minus the host, scheme, and port) for a named Pyramid route configuration.
This function accepts the same argument as
pyramid.request.Request.route_url()and performs the same duty. It just omits the host, port, and scheme information in the return value; only the script_name, path, query parameters, and anchor data are present in the returned string.
For example, if you've defined a route named 'foobar' with the path
/{foo}/{bar}, this call to
route_path:
request.route_path('foobar', foo='1', bar='2')
Will return the string
/1/2.
Note
Calling
request.route_path('route')is the same as calling
request.route_url('route', _app_url=request.script_name).
pyramid.request.Request.route_path()is, in fact, implemented in terms of
pyramid.request.Request.route_url()in just this way. As a result, any
_app_urlpassed within the
**kwvalues to
route_pathwill be ignored.
current_route_url(*elements, **kw)¶
Generates a fully qualified URL for a named Pyramid route configuration based on the 'current route'.
This function supplements
pyramid.request.Request.route_url(). It presents an easy way to generate a URL for the 'current route' (defined as the route which matched when the request was generated).
The arguments to this method have the same meaning as those with the same names passed to
pyramid.request.Request.route_url(). It also understands an extra argument which
route_urldoes not named
_route_name.
The route name used to generate a URL is taken from either the
_route_namekeyword argument or the name of the route which is currently associated with the request if
_route_namewas not passed. Keys and values from the current request matchdict are combined with the
kwarguments to form a set of defaults named
newkw. Then
request.route_url(route_name, *elements, **newkw)is called, returning a URL.
Examples follow.
If the 'current route' has the route pattern
/foo/{page}and the current url path is
/foo/1, the matchdict will be
{'page':'1'}. The result of
request.current_route_url()in this situation will be
/foo/1.
If the 'current route' has the route pattern
/foo/{page}and the current url path is
/foo/1, the matchdict will be
{'page':'1'}. The result of
request.current_route_url(page='2')in this situation will be
/foo/2.
Usage of the
_route_namekeyword argument: if our routing table defines routes
/foo/{action}named 'foo' and
/foo/{action}/{page}named
fooaction, and the current url pattern is
/foo/view(which has matched the
/foo/{action}route), we may want to use the matchdict args to generate a URL to the
fooactionroute. In this scenario,
request.current_route_url(_route_name='fooaction', page='5')Will return string like:
/foo/view/5.
current_route_path(*elements, **kw)¶
Generates a path (aka a 'relative URL', a URL minus the host, scheme, and port) for the Pyramid route configuration matched by the current request.
This function accepts the same argument as
pyramid.request.Request.current_route_url()and performs the same duty. It just omits the host, port, and scheme information in the return value; only the script_name, path, query parameters, and anchor data are present in the returned string.
For example, if the route matched by the current request has the pattern
/{foo}/{bar}, this call to
current_route_path:
request.current_route_path(foo='1', bar='2')
Will return the string
/1/2.
Note
Calling
request.current_route_path('route')is the same as calling
request.current_route_url('route', _app_url=request.script_name).
pyramid.request.Request.current_route_path()is, in fact, implemented in terms of
pyramid.request.Request.current_route_url()in just this way. As a result, any
_app_urlpassed within the
**kwvalues to
current_route_pathwill be ignored.
static_url(path, **kw)¶
Generates a fully qualified URL for a static asset. The asset must live within a location defined via the
pyramid.config.Configurator.add_static_view()configuration declaration (see Serving Static Assets).
Example:
request.static_url('mypackage:static/foo.css') =>
The
pathargument points at a file or directory on disk which a URL should be generated for. The
pathmay be either a relative path (e.g.
static/foo.css) or an absolute path (e.g.
/abspath/to/static/foo.css) or a asset specification (e.g.
mypackage:static/foo.css).
The purpose of the
**kwargument is the same as the purpose of the
pyramid.request.Request.route_url()
**kwargument. See the documentation for that function to understand the arguments which you can provide to it. However, typically, you don't need to pass anything as
*kwwhen generating a static asset URL.
This function raises a
ValueErrorif a static view definition cannot be found which matches the path specification.
static_path(path, **kw)¶
Generates a path (aka a 'relative URL', a URL minus the host, scheme, and port) for a static resource.
This function accepts the same argument as
pyramid.request.Request.static_url()and performs the same duty. It just omits the host, port, and scheme information in the return value; only the script_name, path, query parameters, and anchor data are present in the returned string.
Example:
request.static_path('mypackage:static/foo.css') => /static/foo.css
Note
Calling
request.static_path(apath)is the same as calling
request.static_url(apath, _app_url=request.script_name).
pyramid.request.Request.static_path()is, in fact, implemented in terms of :meth:`pyramid.request.Request.static_url in just this way. As a result, any
_app_urlpassed within the
**kwvalues to
static_pathwill be ignored.
resource_url(resource, *elements, **kw)¶
Generate a string representing the absolute URL of the resource object based on the
wsgi.url_scheme,
HTTP_HOSTor
SERVER_NAMEin the request, plus any
SCRIPT_NAME. The overall result of this method is always a UTF-8 encoded string.
Examples:
request.resource_url(resource) => request.resource_url(resource, 'a.html') => request.resource_url(resource, 'a.html', query={'q':'1'}) => request.resource_url(resource, 'a.html', anchor='abc') => request.resource_url(resource, app_url='') => /
Any positional arguments passed in as
elementsmust be strings Unicode objects, or integer objects. These will be joined by slashes and appended to the generated resource URL. Each of the elements passed in is URL-quoted before being appended; if any element is Unicode, it will converted to a UTF-8 bytestring before being URL-quoted. If any element is an integer, it will be converted to its string representation before being URL-quoted.
Warning
if no
elementsarguments are specified, the resource URL will end with a trailing slash. If any
elementsare used, the generated URL will not end in a trailing slash.
If a keyword argument
queryis present, it will be used to compose a query string that will be tacked on to the end of the URL.encoding. used as a named anchor in the generated URL (e.g. if
anchoris passed as
fooand the resource.
If
schemeis passed as
https, and an explicit argument
app_urlis passed and is not
None, it should be a string that will be used as the port/hostname/initial path portion of the generated URL instead of the default request application URL. For example, if
app_url='', then the resulting url of a resource that has a path of
/baz/barwill be. If you want to generate completely relative URLs with no leading scheme, host, port, or initial path, you can pass
app_url=''. Passing
app_url=''when the resource path is
/baz/barwill return
/baz/bar.
New in version 1.3:
app_url
If
app_urlis passed and any of
scheme,
port, or
hostare also passed,
app_urlwill take precedence and the values passed for
scheme,
host, and/or
portwill be ignored.
If the
resourcepassed in has a
__resource_url__method, it will be used to generate the URL (scheme, host, port, path) for the base resource which is operated upon by this function.
See also
See also Overriding Resource URL Generation.
New in version 1.5:
route_name,
route_kw, and
route_remainder_name
If
route_nameis passed, this function will delegate its URL production to the
route_urlfunction. Calling
resource_url(someresource, 'element1', 'element2', query={'a':1}, route_name='blogentry')is roughly equivalent to doing:
traversal_path = request.resource_path(someobject) url = request.route_url( 'blogentry', 'element1', 'element2', _query={'a':'1'}, traverse=traversal_path, )
It is only sensible to pass
route_nameif the route being named has a
*remainderstararg value such as
*traverse. The remainder value will be ignored in the output otherwise.
By default, the resource path value will be passed as the name
traversewhen
route_urlis called. You can influence this by passing a different
route_remainder_namevalue if the route has a different
*starargvalue at its end. For example if the route pattern you want to replace has a
*subpathstararg ala
/foo*subpath:
request.resource_url( resource, route_name='myroute', route_remainder_name='subpath' )
If
route_nameis passed, it is also permissible to pass
route_kw, which will passed as additional keyword arguments to
route_url. Saying
resource_url(someresource, 'element1', 'element2', route_name='blogentry', route_kw={'id':'4'}, _query={'a':'1'})is roughly equivalent to:
traversal_path = request.resource_path_tuple(someobject) kw = {'id':'4', '_query':{'a':'1'}, 'traverse':traversal_path} url = request.route_url( 'blogentry', 'element1', 'element2', **kw, )
If
route_kwor
route_remainder_nameis passed, but
route_nameis not passed, both
route_kwand
route_remainder_namewill be ignored. If
route_nameis passed, the
__resource_url__method of the resource passed is ignored unconditionally. This feature is incompatible with resources which generate their own URLs.
Note
If the resource used is the result of a traversal, it must be location-aware. The resource can also be the context of a URL dispatch; contexts found this way do not need to be location-aware.
Note
If a 'virtual root path' is present in the request environment (the value of the WSGI environ key
HTTP_X_VHM_ROOT), and the resource was obtained via traversal, the URL path will not include the virtual root prefix (it will be stripped off the left hand side of the generated URL).
Note
For backwards compatibility purposes, this method is also aliased as the
model_urlmethod of request.
resource_path(resource, *elements, **kw)¶
Generates a path (aka a 'relative URL', a URL minus the host, scheme, and port) for a resource.
This function accepts the same argument as
pyramid.request.Request.resource_url()and performs the same duty. It just omits the host, port, and scheme information in the return value; only the script_name, path, query parameters, and anchor data are present in the returned string.
Note
Calling
request.resource_path(resource)is the same as calling
request.resource_path(resource, app_url=request.script_name).
pyramid.request.Request.resource_path()is, in fact, implemented in terms of
pyramid.request.Request.resource_url()in just this way. As a result, any
app_urlpassed within the
**kwvalues to
route_pathwill be ignored.
scheme,
host, and
portare also ignored.
json_body¶
This property will return the JSON-decoded variant of the request body. If the request body is not well-formed JSON, or there is no body associated with this request, this property will raise an exception.
See also
See also Dealing with a JSON-Encoded Request Body.
set_property(callable, name=None, reify=False)¶
Add a callable or a property descriptor to the request instance.
Properties, unlike attributes, are lazily evaluated by executing an underlying callable when accessed. They can be useful for adding features to an object without any cost if those features go unused.
A property may also be reified via the
pyramid.decorator.reifydecorator by setting
reify=True, allowing the result of the evaluation to be cached. Thus the value of the property is only computed once for the lifetime of the object.
callablecan either be a callable that accepts the request as its single positional parameter, or it can be a property descriptor.
If the
callableis a property descriptor a
ValueErrorwill be raised if
nameis
Noneor
reifyis
True.
If
nameis None, the name of the property will be computed from the name of the
callable.
The subscriber doesn't actually connect to the database, it just provides the API which, when accessed via
request.db, will create the connection. Thanks to reify, only one connection is made per-request even if
request.dbis accessed many times.
This pattern provides a way to augment the
requestobject without having to subclass it, which can be useful for extension authors.
New in version 1.3.
localizer¶
A localizer which will use the current locale name to translate values.
New in version 1.5.
locale_name¶
The locale name of the current request as computed by the locale negotiator.
New in version 1.5.
Return a MultiDict containing all the variables from a form request. Returns an empty dict-like object for non-form requests.
Form requests are typically POST requests, however PUT & PATCH requests with an appropriate Content-Type are also supported.
accept¶
Gets and sets the
Acceptheader (HTTP spec section 14.1).
accept_charset¶
Gets and sets the
Accept-Charsetheader (HTTP spec section 14.2).
accept_encoding¶
Gets and sets the
Accept-Encodingheader (HTTP spec section 14.3).
accept_language¶
Gets and sets the
Accept-Languageheader (HTTP spec section 14.4).: domain = domain.split(':',.
from_bytes(b)¶
Create a request from HTTP bytes data. If the bytes contain extra data after the request, raise a ValueError._response(ob)[source]¶
Return
Trueif the object passed as
obis a valid response object,
Falseotherwise..
localizer
Convenience property to return a localizer 'size'
'Pops'.
response[source]
This attribute is actually a "reified" property which returns an instance of the
pyramid.response.Response. class. The response object returned does not exist until this attribute is accessed. Subsequent accesses will return the same Response object.
The
request.responseAPI()) within a view that uses a renderer. Mutations to this response object will be preserved in the response sent to the client.
send()
session[source]
Obtain the session object associated with this request. If a session factory has not been registered during application configuration, a
pyramid.exceptions.ConfigurationErrorwill be raised).
Note
For information about the API of a multidict structure (such as
that used as
request.GET,
request.POST, and
request.params),
see
pyramid.interfaces.IMultiDict.
apply_request_extensions(request)[source]¶
Apply request extensions (methods and properties) to an instance of
pyramid.interfaces.IRequest. This method is dependent on the
requestcontaining a properly initialized registry.
After invoking this method, the
requestshould have the methods and properties that were defined using
pyramid.config.Configurator.add_request_method(). | http://docs.pylonsproject.org/projects/pyramid/en/master/api/request.html | 2017-02-19T14:19:13 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.pylonsproject.org |
BackendApplicationClient¶
- class
oauthlib.oauth2.
Backend client credentials grant workflow.
The client can request an access token using only its client credentials (or other supported means of authentication) when the client is requesting access to the protected resources under its control, or those of another resource owner which has been previously arranged with the authorization server (the method of which is beyond the scope of this specification).
The client credentials grant type MUST only be used by confidential clients.
Since the client authentication is used as the authorization grant, no additional authorization request is needed.
prepare_request_body(body=u'', scope=None, **kwargs)[source]¶
Add the client credentials to the request body.
The client makes a request to the token endpoint by adding the following parameters using the “application/x-www-form-urlencoded” format per Appendix B in the HTTP request entity-body:
The client MUST authenticate with the authorization server as described in Section 3.2.1.
The prepared body will include all provided credentials as well as the
grant_typeparameter set to
client_credentials:
>>> from oauthlib.oauth2 import BackendApplicationClient >>> client = BackendApplicationClient('your_id') >>> client.prepare_request_body(scope=['hello', 'world']) 'grant_type=client_credentials&scope=hello+world' | http://oauthlib.readthedocs.io/en/latest/oauth2/clients/backendapplicationclient.html | 2017-02-19T14:18:52 | CC-MAIN-2017-09 | 1487501169776.21 | [] | oauthlib.readthedocs.io |
Behavior
Safe Behaviors and Practices¶
A key aspect of safety in the workspace is developing good habits and practices. Doing so can help dramatically reduce the likelihood of an accident, and subsequently, an injury. Establishing safe habits also helps with the growth of a good safety culture.
Robot Operation¶
- Safety rules for robot operation encompass all aspects when dealing with the robot, whether it be handling it for transportation, testing its functions, or driving it.
- When lifting the robot, remember to lift with the legs, not with the back
- Keep your back straight and vertical when lifting. The robot is very heavy
- Always have somebody else help you when lifting the robot. Never lift it by yourself
- Lift the robot by the frame, never the bumpers. Doing so not only damages the bumpers, but also increases the chance of the attachments failing, causing the robot to fall
- When turning to robot on to test functions, be sure to inform all teammates around you, as well as ensuring that all hands are off the robot
- If testing the drivetrain and other mobility function of the robot, make sure that it is rested on blocks with the wheels raised from the work surface
- Ensure that any sharp edges or corners are filed down to prevent injury when interacting/working on the robot
- When driving the robot, make sure that the space around the robot is clear and that everyone knows it is on
- Only qualified members or members who receive permission to may drive the robot
Last update: 2021-10-14 | https://docs.bobabots253.org/safety/behavior/ | 2021-10-16T12:41:37 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.bobabots253.org |
Mirantis Kubernetes Engine API limitations¶
To ensure the Mirantis Container Cloud stability in managing the Container Cloud-based Mirantis Kubernetes Engine (MKE) clusters, the following MKE API functionality is not available for the Container Cloud-based MKE clusters as compared to the attached MKE clusters that are not deployed by Container Cloud. Use the Container Cloud web UI or CLI for this functionality instead. | https://docs.mirantis.com/container-cloud/latest/ref-arch/mke-api-limitations.html | 2021-10-16T13:19:55 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.mirantis.com |
Welcome to SCADE Developer Central, you one stop shop for documentation and learning for native mobile cross platform development with Swift 5.
Find our roadmap here - looking forward to your feedback
Releae
Release Date
Features
2.1 Beta
March 2021
Advanced binding functionality
2.0 GA
January 2021
Swift 5.3
New Swift Foundation features
2.0 Beta
December 2020
Groudbreaking new SCADEIDE
Swift 5.2 support
Many changes to support Swift 5.2+5.3
Updated 9 months ago
You can only suggest edits to Markdown body content, but not to the API spec. | https://docs.scade.io/docs/product-roadmap | 2021-10-16T11:00:46 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.scade.io |
The potential of ryegrass as cover crop to reduce soil N2O emissions and increase the population size of denitrifying bacteria
Wang, Haitao; Beule, Lukas; Zang, Huadong; Pfeiffer, Birgit; Ma, Shutan; Karlovsky, Petr; Dittert, Klaus, 2020: The potential of ryegrass as cover crop to reduce soil N2O emissions and increase the population size of denitrifying bacteria. In: European Journal of Soil Science, DOI 10.1111/ejss.13047. with four levels of N fertilizer (0, 5, 10 and 20 g N m−2; applied as calcium ammonium nitrate). The closed‐chamber approach was used to measure soil N2O fluxes. Real‐time PCR was used to estimate the biomass of bacteria and fungi and the abundance of genes involved in denitrification in soil. The results showed that the presence of ryegrass decreased the nitrate content in soil. Cumulative N2O emissions of soil with grass were lower than in bare soil at 5 and 10 g N m−2. Fertilization levels did not affect the abundance of soil bacteria and fungi. Soil with grass showed greater abundances of bacteria and fungi, as well as microorganisms carrying narG, napA, nirK, nirS and nosZ clade I genes. It is concluded that ryegrass serving as a cover crop holds the potential to mitigate soil N2O emissions in soils with moderate or high NO3− concentrations. This highlights the importance of cover crops for the reduction of N2O emissions from soil, particularly following N fertilization. Future research should explore the full potential of ryegrass to reduce soil N2O emissions under field conditions as well as in different soils. Highlights This study was to investigate whether ryegrass serving as cover crop affects soil N2O emissions and denitrifier community size; Plant reduced soil N substrates on one side, but their root exudates stimulated denitrification on the other side; N2O emissions were lower in soil with grass than bare soil at medium fertilizer levels, and growing grass stimulated the proliferation of almost all the denitrifying bacteria except nosZ clade II; Ryegrass serving as a cover crop holds the potential to mitigate soil N2O emissions.
Statistik:View Statistics
Collection
Subjects:denitrification
perennial ryegrass (Lolium perenne L.)
soil bacteria
soil CO2 emissions
soil N2O emissions
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | https://e-docs.geo-leo.de/handle/11858/8451 | 2021-10-16T12:40:49 | CC-MAIN-2021-43 | 1634323584567.81 | [] | e-docs.geo-leo.de |
EncyptedString
Field:
MAX_LENGTH
The maximum text length that the class can contain. It is also the upper limit of the
SecureString class.
Method:
EncryptedString Parse(String)
EncryptedString is the method that takes the value to be stored encrypted, encrypts it after making the controls, and creates the EncryptedString object.
The
EncryptedString object holding the given value is returned as encrypted. The returned object is plain (
IsPlain). |Parameter Name | Description |
|-----|------|
|decryptedString|Value to encrypt. Can not be null and can not be longer than
MAX_LENGTH. The default value is returned if a value (
string.IsNullOrWhiteSpace) is given that is either blank or empty.|
Error Type: ArgumentNullException (This error is thrown if the value is set to null.)
Error Type: FormatException (This error is thrown if the value is longer than
MAX_LENGTH.)
Method:
EncryptedString FromEncrypted(Binary, Binary, String})
It uses the asymmetrically encrypted data to create an
EncryptedString object. It is obtained by giving function to decrypt encrypted data and password.
When sensitive data is read from the database, the
EncryptedString object is created with this method. Thus, decryption is only postponed as needed.
The
EncryptedString object holding the encrypted value is returned. The returned object is encrypted (
IsEncrypted).
Error Type: ArgumentNullException (This error is thrown if the decryptDelegate parameter is set to null.)
Property:
IsPlain
Indicates that the record is encrypted by symmetric key in memory. Returns true if the object was created with the Parse method or the SecureString field constructor.
Property:
IsEncrypted
Indicates that the record is encrypted in memory with an asymmetric key. Returns true if the object was created using the
FromEncrypted method.
Property:
IsEmpty
Default value. It can be obtained by
default(EncrypedString).
Property:
EncryptedValue
IsEncrypted returns the encrypted data with the asymmetric key. If not, returns null data.
Constructor(SecureString)
Converts the given value to the
EncryptedString object. Creates a plain object (
IsPlain).
Method:
string Decrypt()
It allows the sensitive data to be solved and turned explicitly. This is a costly process, so it needs to be done only when needed. If the object
IsEncrypted is slow, then
IsPlain is fast.
Example; Let's say the CVV data is kept encrypted. This data will only be needed during spending. In this case, the data can be decrypted and sent to the related system. Or let's say spending service. In this case, if the stored CVV value equals the stored CVV value, it is decrypted before it is compared.
If the value is default (
default(EncryptedString)), an empty
string is returned.
Returns the decrypted of the Sensitive data
Method:
string ToString()
Directly calls the
Decrypt method. This method is implemented due to the
Parse/ToString pattern when calling internal services. It is recommended to use Decrypt in code in the same way as Decrypt.
Returns the decrypted of the Sensitive data
Method:
SecureString ToSecureString()
Converts the object to a
SecureString object. If
IsEncrypted is true, it is first decrypted with the asymmetric key, then converted to the
SecureString object with the symmetric key. Otherwise it will return directly to the SecureString object it is already hosting. So if the object
IsEncrypted is slow,
IsPlain is fast.
Returns the SecureString state of the data held by the object. | https://docs.gazel.io/value-types/encryptedstring.html | 2021-10-16T11:50:27 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.gazel.io |
pty — Pseudo-terminal utilities¶
Source code: Lib/pty.py:
pty.
fork()¶).
pty.
openpty()¶
Open a new pseudo-terminal pair, using
os.openpty()if possible, or emulation code for generic Unix systems. Return a pair of file descriptors
(master, slave), for the master and the slave end, respectively.
pty.
spawn(argv[, master_read[, stdin_read]])¶
Spawn a process, and connect its controlling terminal with the current process’s standard io. This is often used to baffle programs which insist on reading from the controlling terminal. It is expected that the process spawned behind the pty will eventually terminate, and when it does spawn will return.
The functions master_read and stdin_read are passed a file descriptor which they should read from, and they should always return a byte string. In order to force spawn to return before the child process exits an
OSErrorshould be thrown.
The default implementation for both functions will read and return up to 1024 bytes each time the function is called. The master_read callback is passed the pseudoterminal’s master file descriptor to read output from the child process, and stdin_read is passed file descriptor 0, to read from the parent process’s standard input.
Returning an empty byte string from either callback is interpreted as an end-of-file (EOF) condition, and that callback will not be called after that. If stdin_read signals EOF the controlling terminal can no longer communicate with the parent process OR the child process. Unless the child process will quit without any input, spawn will then loop forever. If master_read signals EOF the same behavior results (on linux at least).
If both callbacks signal EOF then spawn will probably never return, unless select throws an error on your platform when passed three empty lists. This is a bug, documented in issue 26228.
Changed in version 3.4:
spawn()now returns the status value from
os.waitpid()on the child process.
Example¶
The following program acts like the Unix command script(1), using a pseudo-terminal to record all input and output of a terminal session in a “typescript”.
import argparse import os import pty import sys import time parser = argparse.ArgumentParser() parser.add_argument('-a', dest='append', action='store_true') parser.add_argument('-p', dest='use_python', action='store_true') parser.add_argument('filename', nargs='?', default='typescript') options = parser.parse_args() shell = sys.executable if options.use_python else os.environ.get('SHELL', 'sh') filename = options.filename mode = 'ab' if options.append else 'wb' with open(filename, mode) as script: def read(fd): data = os.read(fd, 1024) script.write(data) return data print('Script started, file is', filename) script.write(('Script started on %s\n' % time.asctime()).encode()) pty.spawn(shell, read) script.write(('Script done on %s\n' % time.asctime()).encode()) print('Script done, file is', filename) | https://docs.python.org/3.7/library/pty.html | 2021-10-16T11:07:48 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.python.org |
The quality of protection (QOP) defines the strength of the encryption algorithms the system uses when transmitting messages between Teradata Vantage and its clients.
There are 2 types of policy subject to Quality of Protection (QOP):
- An integrity QOP determines the strength of the algorithm the system uses for calculating the checksum that guarantees message integrity.
- A confidentiality QOP determines the strength of the algorithm for encrypting a message exchange between a client and Vantage. In the absence of a confidentiality QOP policy, client requests for confidentiality use the DEFAULT QOP. See Encryption.
You can assign confidentiality and integrity policies by:
- Database user name or directory user name
- Database profile
- Client IP address
Users who access the database through a middle-tier application that uses pooled sessions are subject to the security policies assigned to the application logon user, rather than the policies assigned to them as individuals.
You can also enforce use of the DEFAULT confidentiality QOP by host group ID. See Requiring Confidentiality.
Java clients do not support encryption stronger than AES-128 without installation of a special security policy package. See QOP Configuration Change Guidelines. | https://docs.teradata.com/r/8Mw0Cvnkhv1mk1LEFcFLpw/6Z4sHnsu3ChpRklMqGNHkA | 2021-10-16T11:20:00 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.teradata.com |
time of the recorded clip relative to when StartRecording was called.
For example, if we started recording at second 10, and ended recording at second 15, then this will have a value of 5. If the buffer is not initialized (StartRecording is not called), the value of this property will be -1. See Also: recorderStartTime. | https://docs.unity3d.com/2019.3/Documentation/ScriptReference/Animator-recorderStopTime.html | 2021-10-16T13:15:46 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.unity3d.com |
Customization
- Best practices for customizing Deploy
- Customizing the login screen
- Configure the task execution engine
- Create a custom step for rules
- Add input hints in configuration items
- Add a checkpoint to a custom plugin
- Using the View As feature
- Writing Jython scripts for Deploy
- Defining a synthetic enum property
- Using variables and expressions in FreeMarker templates
- Automatically archive tasks according to a user-defined policy
- Automatically purge packages according to a user-defined policy
- Automatically purge the task archive according to a user-defined policy
Deploy CLI
- Types used in the Deploy CLI
- Objects available in the Deploy CLI
- Execute tasks from the Deploy CLI
- Work with configuration items in the Deploy CLI
- Set up roles and permissions using the Deploy CLI
- Export items from or import items into the repository
- Configure the CLI to trust a Deploy server certificate
- Discover middleware using the Deploy CLI
- Troubleshooting the Deploy CLI | https://docs.xebialabs.com/v.10.2/deploy/get-started/ | 2021-10-16T12:31:31 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.xebialabs.com |
The copper(II)‐binding tripeptide GHK, a valuable crystallization and phasing tag for macromolecular crystallography
Mehr, Alexander
Henneberg, Fabian
Chari, Ashwin
Görlich, Dirk
Huyton, Trevor
Mehr, Alexander; Henneberg, Fabian; Chari, Ashwin; Görlich, Dirk; Huyton, Trevor, 2020: The copper(II)‐binding tripeptide GHK, a valuable crystallization and phasing tag for macromolecular crystallography. In: Acta Crystallographica Section D, Band 76, 1222 - 1232, DOI 10.1107/S2059798320013741.
The growth of diffraction‐quality crystals and experimental phasing remain two of the main bottlenecks in protein crystallography. Here, the high‐affinity copper(II)‐binding tripeptide GHK was fused to the N‐terminus of a GFP variant and an MBP‐FG peptide fusion. The GHK tag promoted crystallization, with various residues (His, Asp, His/Pro) from symmetry molecules completing the copper(II) square‐pyramidal coordination sphere. Rapid structure determination by copper SAD phasing could be achieved, even at a very low Bijvoet ratio or after significant radiation damage. When collecting highly redundant data at a wavelength close to the copper absorption edge, residual S‐atom positions could also be located in log‐likelihood‐gradient maps and used to improve the phases. The GHK copper SAD method provides a convenient way of both crystallizing and phasing macromolecular structures, and will complement the current trend towards native sulfur SAD and MR‐SAD phasing.A novel three‐residue tag containing the residues GHK that can be used to promote crystallization and in SAD phasing experiments using its tightly bound copper ion is described. image
Statistik:View Statistics
Collection
Subjects:phasing
crystallization
GHK
SAD
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | https://e-docs.geo-leo.de/handle/11858/8452 | 2021-10-16T12:27:50 | CC-MAIN-2021-43 | 1634323584567.81 | [] | e-docs.geo-leo.de |
Table of Contents
Product Index. | http://docs.daz3d.com/doku.php/public/read_me/index/4699/start | 2021-10-16T13:40:39 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.daz3d.com |
STEP 1: Log in at app.amplicare.com.
Tip: Bookmark this page on your computer workstations for easy access.
STEP 2: Search for the patient and go to their "Medicare Plans" tab
STEP 3: Enter or confirm your patient’s current plan and subsidy information.
Learn more about how to check current plan or subsidy.
STEP 4: Confirm your patient’s active drug list is up-to-date
Manually add maintenance medications that they're filling at another pharmacy.
Deactivate one-time use medications (most one-time use medications are automatically deactivated).
Click on any of the medications to edit their day supply and/or quantity if needed.
STEP 5: Select the appropriate plan types to include in the comparison
By default the plan type selected will match the patient's current plan type entered in their Amplicare profile. We recommend keeping the patient in the same plan type as their current plan unless the patient explicitly states otherwise.
Learn more about the different plan types.
STEP 6: Adjust the sorting and filtering options
Show: "All Plans" or "Preferred Plans" -- We recommend keeping "All Plans" selected, which will show you all plans that are in-network with your pharmacy and available in your patient's zip code. There's a misconception that preferred plans will be the cheapest plan option for your patient to fill at your pharmacy, but oftentimes that's not actually not the case. Learn more about preferred plans.
Sort: "Out-of-Pocket", "Annual Deductible", "Monthly Premium", "CMS Star Ratings", "Est. DIR Fees", and "Est. Revenue" -- Sort the plans based on your patient's preferences. By default the plans will be sorted by "out-of-pocket", which is a total of the patient's copays and monthly premiums.
Effective: {{month}} -- this will specify the month when the patient's new plan coverage will go into effect once they enroll in the plan.
Nearby Pharmacy: Leave this option as is. By default all plan pricing reflects what the patient would pay if they fill at your pharmacy. Learn more about when to use the Nearby Pharmacy tool.
STEP 7: Review the information for each of the resulting plans
Plan name and plan's CMS star rating. Hover your mouse over "Info" to see a tooltip including the plan's CMS contract and plan ID, as well as BIN, PCN, and Group numbers.
Monthly premium (what the patient will pay directly to the plan each month as a flat fee) and annual deductible (traditionally what the patient will have to pay for their medications out-of-pocket before the plan begins to cover a portion of their costs).
Out-of-pocket cost. This number reflects the total amount the patient is projected to pay out-of-pocket (monthly premiums and copays for medications).
Plan tags indicate important information about the plan's preferred network and coverage. Learn more about what each of these tags mean ("Preferred", "Non-Preferred", "Mail Order", "Chain", "Benchmark", and "Gap Coverage").
Drug restrictions indicate if there will be any restrictions for any of the patient's active medications with that plan. Click the link to see a dropdown with specific information. Learn more about plan formulary restrictions.
Est. DIR Fee and Est. Revenue: Hover your mouse over these numbers to see a tooltip with more information. Learn more about considering DIR fees while doing a plan comparison.
Tip: If you're working directly with a patient we recommend switching to "Patient View" to hide irrelevant information for them (i.e. estimated revenue and DIR fees). Learn more about "Patient View" vs. "Pharmacy View".
STEP 8: Select multiple plans to compare a breakdown in out-of-pocket costs
Select the plans in which the patient would like to see a further breakdown in costs by clicking the little box on the bottom left of each plan. Then, click COMPARE PLANS on the lower right of the screen.
We recommend selecting at least two other plans in addition to the patient's current plan to compare side-by-side.
Step 9: Compare the breakdown in costs on the Monthly Cost page
Click the name of a month, and it will expand to show the specific medications the patient will fill that month. Under the "Copay" column you'll see what their associated copay will be. Under "Full cost" you'll see what the full cost of the medication is (a sum of the patient's copay and the insurance reimbursement). Note: The "Full cost" column will be hidden in "Patient View".
This page will also clearly show how the patient transitions through different phases of coverage (Initial Coverage, Donut Hole, Catastrophic Coverage). Learn more about how to discuss the Medicare phases of coverage with your patients.
Note: If your patient is interested in comparing Medicare Advantage (MA-PD) plans, there are more factors to consider.
STEP 10: Help your patient enroll
After reviewing the information on the Monthly Cost page, your patients can decide to enroll in a new plan. Learn more about the options to help them do this!
What's Next?
Learn more about best practices for consulting with your patients about Medicare plans. | https://docs.amplicare.com/en/articles/21884-performing-a-plan-comparison | 2021-10-16T12:50:44 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://downloads.intercomcdn.com/i/o/164648449/680550cfa54ab95f5a227e05/Screen+Shot+2019-11-20+at+12.08.56+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164648819/6c44284f8cc1c99ee5f37559/Screen+Shot+2019-11-20+at+12.11.10+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164649300/ac7b1317fe522ec67e4cb6a0/Screen+Shot+2019-11-20+at+12.12.48+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164660707/bb62a51209ef73554a0764fc/Screen+Shot+2019-11-20+at+12.50.21+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164652814/5bd8e3de83e7500128d07a50/Screen+Shot+2019-11-20+at+12.22.57+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164652919/9373390bf266cd69cbf90320/Screen+Shot+2019-11-20+at+12.23.16+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164661613/54b8b66ef0bcd5ad3811f146/Screen+Shot+2019-11-20+at+12.54.00+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/141351502/d739bb8b3ff9d53e8ef21311/Screen+Shot+2019-08-15+at+10.11.57+AM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/164668519/6dc61586eae005bdc77b5dc6/Screen+Shot+2019-11-20+at+1.18.41+PM.png',
None], dtype=object) ] | docs.amplicare.com |
Error code: sss_invalid_sublist_operation
Error message: "Failed to create, update/transform record because You have attempted an invalid sublist or line item operation. You are either trying to access a field on a non-existent line or you are trying to add or remove lines from a static sublist. Tip: Please make sure that all the line items that you're trying to use exist on the record. Settlement details: Settlement Id# 0123456789, Amazon AFN Order# 123-9999999-000000."
Reason: Settlement Payment flow creates Customer Payments in NetSuite via Transactions > Customer > Accept Customer payment. This page has a limit of 10,000 transactions. If the Invoice the Settlement is trying to pay does not show on this page, that is when you will see this error. Due to NetSuite limitation, customer payments can’t be applied against an invoice when the customer has more than 10000 open invoices.
Resolution: Usually, this gets resolved when you click Retry data. If it doesn't resolve you'll need to further check:
- If the invoice is showing on this page or if it is included in the 10,000 limit.
- If it is not showing, you'll need to check why it is not showing or process those showing on the page first to free up some space so that other Invoices will get in the 10,000 limit.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360056612412-Resolve-Amazon-error-Invalid-sublist-or-line-item-operation | 2021-10-16T12:18:49 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.celigo.com |
Text, Text Area, Email, and Rich Text fields
Let’s look at the text fields and how they function.
Text field
The Text field lets you collect data on a single line. Text fields accept letters and symbols from any language, numbers, and special characters.
Expressions/formulas
See all the text expressions you can use as a part of a formula.
Validations
There are 13 types of validations you can use for a text field. Learn more about all the validations.
Text Area field
The Text Area field lets you collect data on multiple lines. It also accepts letters and symbols from any language, numbers, and special characters.
You can apply formatting to the data in a text area, including bold, italics, hyperlink, subscript, and superscript. To remove any formatting, highlight the text and click the Remove formatting button (
).
You cannot use any expressions or formulas with Text Area fields.
Validations
There are 4 types of validations you can use with a Text Area field:
- Contains
- Does not contain
- Max length
- Min length
The Email field lets you collect valid email addresses. This field requires a “@” and “.” from the user to be considered valid.
Settings
You cannot use any expressions or formulas with email fields.
Validations
There are 8 types of validations you can use for an email field.
Richtext
The Rich text field lets you add any kind of static text, image, or video to any part of a form.
There are multiple formatting options such as:
- Bold
- Italics
- Underline
- Strikethrough
- Text color
- Alignment
- H1 and H2 headings
- Hyperlinks
- Bullets
- Blockquote
- Add an image
- Add a video clip
You can use the Insert field button to add dynamic text based on other form or system fields.
There are no validations or expressions for the Rich Text field. | https://docs.kissflow.com/article/mlqv59xchf-text-fields-text-textarea-richtext | 2021-10-16T12:03:24 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://files.helpdocs.io/vy1bn54mxh/articles/mlqv59xchf/1592815684986/screenshot-2020-06-22-at-2-17-56-pm.png',
None], dtype=object) ] | docs.kissflow.com |
API Gateway use cases
Topics
Use API Gateway to create HTTP APIs
HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs.
You can use HTTP APIs to send requests to Amazon Lambda functions or to any publicly routable HTTP endpoint.
For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client.
HTTP APIs support OpenID
Connect
To learn more, see Choosing between HTTP APIs and REST APIs.
Use API Gateway to create REST APIs
An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user.
For example,
/incomes could be the path of a resource representing
the income of the app user. A resource can have one or more operations that are
defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A
combination of a resource path and an operation identifies a method of the API. For
example, a
POST /incomes method could add an income earned by the
caller, and a
GET /expenses method could query the reported expenses
incurred by the caller.
The app doesn't need to know where the requested data is stored and fetched from on the backend. In API Gateway REST APIs, the frontend is encapsulated by method requests and method responses. The API interfaces with the backend by means of integration requests and integration responses.
For example, with DynamoDB as the backend, the API developer sets up the integration request to forward the incoming method request to the chosen backend. The setup includes specifications of an appropriate DynamoDB action, required IAM role and policies, and required input data transformation. The backend returns the result to API Gateway as an integration response.
To route the integration response to an appropriate method response (of a given
HTTP status code) to the client, you can configure the integration response to map
required response parameters from integration to method. You then translate the
output data format of the backend to that of the frontend, if necessary. API Gateway
enables you to define a schema or model for the payload
API Gateway provides REST API management functionality such as the following:
Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI
Throttling of HTTP requests
Use API Gateway to create WebSocket APIs
In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.
For example, you could build a serverless application using an API Gateway WebSocket API and Amazon Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as Amazon Lambda, Amazon Kinesis, or an HTTP endpoint based on message content.
You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following:
Chat applications
Real-time dashboards such as stock tickers
Real-time alerts and notifications
API Gateway provides WebSocket API management functionality such as the following:
Monitoring and throttling of connections and messages
Using Amazon X-Ray to trace messages as they travel through the APIs to backend services
Easy integration with HTTP/HTTPS endpoints
Who uses API Gateway?
There are two kinds of developers who use API Gateway: API developers and app developers.
An API developer creates and deploys an API to enable the required functionality in API Gateway. The API developer must be an IAM user in the Amazon account that owns the API.
An app developer builds a functioning application to call Amazon services by invoking a WebSocket or REST API created by an API developer in API Gateway.
The app developer is the customer of the API developer. The app developer doesn't need to have an Amazon account, provided that the API either doesn't require IAM permissions or supports authorization of users through third-party federated identity providers supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google.
Creating and managing an API Gateway API
An API developer works with the API Gateway service component for API management,
named
apigateway, to create, configure, and deploy an API.
As an API developer, you can create and manage an API by using the API Gateway console, described in Getting started with API Gateway, or by calling the API references. There are several ways to call this API. They include using the Amazon Command Line Interface (Amazon CLI), or by using an Amazon SDK. In addition, you can enable API creation with Amazon CloudFormation templates or (in the case of REST APIs and HTTP APIs) Working with API Gateway extensions to OpenAPI.
For a list of Regions where API Gateway is available, as well as the associated control service endpoints, see Amazon API Gateway Endpoints and Quotas.
Calling an API Gateway API
An app developer works with the API Gateway service component for API execution,
named
execute-api, to invoke an API that was created or deployed in
API Gateway. The underlying programming entities are exposed by the created API. There
are several ways to call such an API. To learn more, see Invoking a REST API in Amazon API Gateway and Invoking a WebSocket API. | https://docs.amazonaws.cn/en_us/apigateway/latest/developerguide/api-gateway-overview-developer-experience.html | 2021-10-16T12:30:10 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.amazonaws.cn |
Compatibility for changing the instance type
You can resize an instance only if its current instance type and the new instance type that you want are compatible in the following ways:
Virtualization type: Linux AMIs use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). You can't resize an instance that was launched from a PV AMI to an instance type that is HVM only. For more information, see Linux AMI virtualization types. To check the virtualization type of your instance, see the Virtualization field on the details pane of the Instances screen in the Amazon EC2 console..
Network cards: Some instance types support multiple network cards. You must select an instance type that supports the same number of network cards as the current instance type.). Therefore, to mount file systems at boot time using
/etc/fstab, you must use UUID/Label instead of device names.
AMI: For information about the AMIs required by instance types that support enhanced networking and NVMe, see the Release Notes in the following documentation: | https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resize-limitations.html | 2021-10-16T12:16:34 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.aws.amazon.com |
Developing for the Plugin System
This will most likely be deprecated in favour of adding the possible extensions to the core code base.
This documentation will hopefully give you a basis for how to write a plugin for LibreNMS. A test plugin is included in LibreNMS distribution.
Generic structure
Plugins need to be installed into html/plugins
The structure of a plugin is follows:
html/plugins /PluginName /PluginName.php /PluginName.inc.php
The above structure is checked before a plugin can be installed.
All files / folder names are case sensitive and must match.
PluginName - This is a directory and needs to be named as per the plugin you are creating.
- PluginName.php :: This file is used to process calls into the plugin from the main LibreNMS install. Here only functions within the class for your plugin that LibreNMS calls will be executed. For a list of currently enabled system hooks, please see further down. The minimum code required in this file is (replace Test with the name of your plugin):
<?php class Test { } ?>
- PluginName.inc.php :: This file is the main included file when browsing to the plugin itself. You can use this to display / edit / remove whatever you like. The minimum code required in this file is:
<?php ?>
System Hooks
System hooks are called as functions within your plugin class. The following system hooks are currently available:
- menu() :: This is called to build the plugin menu system and you can use this to link to your plugin (you don't have to).
public static function menu() { echo('<li><a href="plugin/p='.get_class().'">'.get_class().'</a></li>'); }
- device_overview_container($device) :: This is called in the Device Overview page. You receive the $device as a parameter, can do your work here and display your results in a frame.
public static function device_overview_container($device) { echo('<div class="container-fluid"><div class="row"> <div class="col-md-12"> <div class="panel panel-default panel-condensed"> <div class="panel-heading"><strong>'.get_class().' Plugin </strong> </div>'); echo(' Example plugin in "Device - Overview" tab <br>'); echo('</div></div></div></div>'); }
- port_container($device, $port) :: This is called in the Port page, in the "Plugins" menu_option that will appear when your plugin gets enabled. In this function, you can do your work and display your results in a frame.
public static function port_container($device, $port) { echo('<div class="container-fluid"><div class="row"> <div class="col-md-12"> <div class="panel panel-default panel-condensed"> <div class="panel-heading"><strong>'.get_class().' plugin in "Port" tab</strong> </div>'); echo ('Example display in Port tab</br>'); echo('</div></div></div></div>'); } | https://docs.librenms.org/Extensions/Plugin-System/ | 2021-10-16T11:19:29 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.librenms.org |
OpenCV provides two transformation functions, cv.warpAffine and cv.warpPerspective, with which you can perform all kinds of transformations. cv.warpAffine takes a 2x3 transformation matrix while cv.warpPerspective takes a 3x3 transformation matrix as input.
Scaling is just resizing of the image. OpenCV comes with a function cv.resize() for this purpose. The size of the image can be specified manually, or you can specify the scaling factor. Different interpolation methods are used. Preferable interpolation methods are cv.INTER_AREA for shrinking and cv.INTER_CUBIC (slow) & cv.INTER_LINEAR for zooming. By default, the interpolation method cv.INTER_LINEAR is used for all resizing purposes. You can resize an input image with either of following methods:
Translation is the shifting of an object's location. If you know the shift in the (x,y) direction and let it be \((t_x,t_y)\), you can create the transformation matrix \(\textbf{M}\) as follows:
\[M = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \end{bmatrix}\]
You can take make it into a Numpy array of type np.float32 and pass it into the cv.warpAffine() function. See the below example for a shift of (100,50):
warning
The third argument of the cv.warpAffine() function is the size of the output image, which should be in the form of **(width, height)**. Remember width = number of columns, and height = number of rows.
See the result below:
Rotation of an image for an angle \(\theta\) is achieved by the transformation matrix of the form
\[M = \begin{bmatrix} cos\theta & -sin\theta \\ sin\theta & cos\theta \end{bmatrix}\]
But OpenCV provides scaled rotation with adjustable center of rotation so that you can rotate at any location you prefer. The modified transformation matrix is given by
\[\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot center.x - \beta \cdot center.y \\ - \beta & \alpha & \beta \cdot center.x + (1- \alpha ) \cdot center.y \end{bmatrix}\]
where:
\[\begin{array}{l} \alpha = scale \cdot \cos \theta , \\ \beta = scale \cdot \sin \theta \end{array}\]
To find this transformation matrix, OpenCV provides a function, cv.getRotationMatrix2D. Check out the below example which rotates the image by 90 degree with respect to center without any scaling.
See the result:
In affine transformation, all parallel lines in the original image will still be parallel in the output image. To find the transformation matrix, we need three points from the input image and their corresponding locations in the output image. Then cv.getAffineTransform will create a 2x3 matrix which is to be passed to cv.warpAffine.
Check the below example, and also look at the points I selected (which are marked in green color):
See the result:
For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain straight even after the transformation. To find this transformation matrix, you need 4 points on the input image and corresponding points on the output image. Among these 4 points, 3 of them should not be collinear. Then the transformation matrix can be found by the function cv.getPerspectiveTransform. Then apply cv.warpPerspective with this 3x3 transformation matrix.
See the code below:
Result: | https://docs.opencv.org/4.5.2/da/d6e/tutorial_py_geometric_transformations.html | 2021-10-16T12:11:21 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.opencv.org |
Reference hardware
The reference hardware provides a set of specifications to use when scoping and scaling the Splunk platform. It also sets a baseline for performance when handling search and indexing loads.
Reference machine for single-instance deployments
The following machine requirements represent the basic building block of a Splunk Enterprise deployment.
- Intel x86 64-bit chip architecture
- 12 CPU cores at 2Ghz or greater per core.
- 12GB RAM
- Standard 1Gb Ethernet NIC, optional second NIC for a management network
- Standard 64-bit Linux or Windows distribution
Disk subsystem
The disk subsystem for a reference machine should be capable of handling a high number of averaged a hard drive can produce, the more data it can index and search in a given period of time. While many variable items factor into the amount of IOPS that a hard drive can produce, the following are three most important elements:
- Its rotational speed in revolutions per minute.
- Its average latency, which is the amount of time it takes to spin its platters half a rotation.
- Its average seek time, which is the amount of time it takes to retrieve a requested block of data.
To get the most IOPS out of a hard drive, choose drives that have high rotational speeds and low average latency and seek times. Every drive manufacturer provides this information, and some provide much more.
For information on IOPS and how to calculate them, see: Insufficient disk I/O is the most common limitation found in a Splunk infrastructure. Carefully in this manual. in this manual.
Reference machine for distributed deployments
As the number of active users increases along with the data ingestion rate, the architecture requirements change from a single instance to a distributed Splunk Enterprise environment. The search head and indexer roles have unique hardware recommendations.
Dedicated search head
A search head will utilize CPU resources more consistently than an indexer, but does not require the fast disk throughput or a large pool of local storage for indexing.
- Intel 64-bit chip architecture
- 16 CPU cores at 2Ghz or greater per core.
- 12GB RAM
- 2 x 300GB, 10,000 RPM SAS hard disks, configured in RAID 1
- A 1Gb Ethernet NIC, optional 2nd NIC for a management network
- A 64-bit Linux or Windows distribution
A search request uses 1 CPU core while running. Add additional CPU cores to a search head to accommodate more active users and a higher concurrent search load. You must account for scheduled searches when provisioning a search head. For a review on how searches are prioritized, see the topic Configure the priority of scheduled reports in the Reporting Manual. For information on scaling search performance, see How to maximize search performance in this manual.
Indexer
Distributing the indexing process allows the Splunk platform to scale data consumption into terabytes a day. A single indexer carries the same disk I/O bandwidth requirements as a group of indexers. Adding additional indexers allows the work of search requests and data indexing to be shared across many instances.
- Intel 64-bit chip architecture.
- 12 CPU cores at 2Ghz or greater per core.
- 12GB RAM.
- Disk subsystem capable of 800 average IOPS. For details, see the topic Disk subsystem.
- A 1Gb Ethernet NIC, with optional second NIC for a management network.
- A 64-bit Linux or Windows distribution.
If the indexer CPU cores exceeds the reference machine specifications, consider implementing the Parallelization settings to improve the indexer performance for specific use cases.
As a guideline for planning your storage infrastructure, indexers do many bulk reads and disk seeks. At higher daily volumes, local disk might not provide cost-effective storage for the time frames where you want a fast search. Optionally, deploy fast attached storage or networked storage, such as storage area networks (SAN) over fiber, that can provide the required IOPS per indexer.
- More disks (specifically, more spindles) are better for indexing performance.
- Total throughput of the entire system is important.
- The ratio of disks to disk controllers in a particular system should be higher, similar to how you provision a database server.
Ratio of indexers to search heads
There is no practical limitation on the number of search heads an indexer can support, or on the number of indexers a search head can search against. The use-case will determine what Splunk instance role (search head or indexer) the infrastructure will need to scale while maintaining performance. For a table with scaling guidelines, see Summary of performance recommendations in this manual.
Premium solutions app requirements
Premium apps can require greater hardware resources than the standard reference machines provide. Before architecting a deployment for a premium app, review the app documentation for scaling and hardware recommendations.
Virtual hardware
Splunk platform instances are supported in a virtual hosting environment. A VMWare hosted indexer with reserved resources that meet the "reference hardware" specifications can consume data about 10 to 15 percent slower than a indexer hosted on a bare-metal machine. Search performance in a virtual hosting environment is a close match to bare-metal machines.
This describes a best-case scenario that does not account for resource contention with other active virtual machines sharing the same physical server or storage array. It also does not account for certain vendor-specific I/O enhancement techniques, such as Direct I/O or Raw Device Mapping.
For recommendations on running Splunk Enterprise in a VMWare virtual machine, see Deploying Splunk Enterprise Inside Virtual Environments on the main Splunk website. site.
- As a hyper thread of a core, a vCPU acts as a core, but the physical core must schedule its workload among other workloads of other vCPUs that the physical core handles.
For indexing and data storage, note that:
- required IOPS necessary for indexing and searching. See EBS - Product Details () on the AWS site.
- Not every EC2 instance type offers the network throughput to the EBS volume that you need. To ensure that bandwidth you must either launch the instance4 as "EBS-optimized" or choose an instance type that provides a minimum of 10Gb of bandwidth. See Amazon EC2 Instance Configuration () on the AWS site.
For forwarding, note that the proximity of your cloud infrastructure to your forwarders can have a major impact on performance of the whole environment.
For recommendations on running Splunk Enterprise in AWS, see Deploying Splunk Enterprise On Amazon Webservices on the main Splunk website.! | https://docs.splunk.com/Documentation/Splunk/6.3.13/Capacity/Referencehardware | 2021-10-16T12:23:36 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Matrix keyboard shortcuts
Shortcuts in the Windows and macOS columns are specific to the operating system they reference. Unified shortcuts are new in Matrix 6 and work across all browsers and operating systems.
macOS keyboard symbol legend
- ⌃
Control key.
- ⌥
Option key.
- ⌘
Place of Interest/Command key.
- ⇧
Shift key. | https://docs.squiz.net/matrix/version/latest/using/concepts/keyboard-shortcuts.html | 2021-10-16T12:19:12 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.squiz.net |
Organisation chart
The organisational chart is a visual representation of your organisation’s structure. You can also click through to the staff profile of a colleague by clicking on their name within the org chart.
Within the org chart, if a staff member has a team of people working to them, you’ll be able to click the 'reveal' down arrow (
) to expose their team members. You can also click the 'expand' symbol (
) to 'zoom in' so that you can see the expanded team more clearly. From the expanded view you can click the 'back-up-a-level' symbol (
) to return to your previous view of the org chart. It sounds more complex than it is—have a go and you’ll see for yourself!
The data for the org chart and relationships between roles, is controlled by your organisation and auto-magically presented in this neat org chart format. If you can see that you or a colleague are in the the wrong position within the chart, you can contact your organisation’s Squiz Workplace administrators. An easy way to do this is through the Feedback form available from every page. | https://docs.squiz.net/workplace/v3.8/using/workplace-features/organisational-chart.html | 2021-10-16T11:39:21 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.squiz.net |
Register for a live Demo
Get a Quote
Search…
Introduction
Architecture
Getting Started Section
Getting Started
Installing Thinfinity VirtualUI
Simple UI Remoting
Compiling and Testing a WinForms Application
Compiling and Testing a Delphi Application
Compiling and Testing a C++ Application
Registering the Application in Thinfinity VirtualUI Server
Accessing the App from the Web
Application Execution Behavior
List of available demos for download
Accessing the App from the Web
Follow the next steps to access registered applications using the web browser:
1. Open your preferred web browser.
2. Type in the application URL. This URL is composed of the server URL plus the Virtual Path configured for the application, i.e.
Alternatively, leave the Virtual Path as the root path using just
. In this case, a page with the list of applications will show up, unless you have
set a profile to be the default application
, in which case you will be connected to the Default Application.
a. Check the 'Open in a new browser window' option if you want the application to be opened in another tab.
b. Click on the corresponding icon of the application you want to access.
3. Authenticated users can log out with the 'Logout' button in the top right corner of the web interface.
Read more
:
·
Application Execution Behavior
Registering the Application in Thinfinity VirtualUI Server
Application Execution Behavior
Last modified
1yr ago
Export as PDF
Copy link | https://thinfinity-vui-v3-docs.cybelesoft.com/getting-started-1/getting-started/simple-ui-remoting/accessing-the-app-from-the-web | 2021-10-16T12:13:18 | CC-MAIN-2021-43 | 1634323584567.81 | [] | thinfinity-vui-v3-docs.cybelesoft.com |
Installation¶
🐸TTS supports python >=3.6 <=3.9 and tested on Ubuntu 18.10, 19.10, 20.10.
Using
pip¶
pip is recommended if you want to use 🐸TTS only for inference.
You can install from PyPI as follows:
pip install TTS # from PyPI
By default, this only installs the requirements for PyTorch. To install the tensorflow dependencies as well, use the
tf extra.
pip install TTS[tf]
Or install from Github:
pip install git+ # from Github
Installing From Source¶
This is recommended for development and more control over 🐸TTS.
git clone cd TTS make system-deps # only on Linux systems. make install
On Windows¶
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here]( | https://tts.readthedocs.io/en/latest/installation.html | 2021-10-16T11:44:21 | CC-MAIN-2021-43 | 1634323584567.81 | [] | tts.readthedocs.io |
Result Metric
Result metric (
aitoolbox.experiment.core_metrics) is an abstraction built around the calculation of the single
performance metric. It helps keep the code base more reusable and better structured, especially when used as part of
the encapsulating Result Package.
AIToolbox comes out of the box with implemented several commonly used performance evaluation metrics implemented as result metrics. These can be found in:
Use of Result Metrics inside Result Packages
As it is described in the Implementing New Result Packages section, result metrics come in handy when developing
the result packages which are wrapping together multiple metrics needed to evaluate a certain ML task. To support this
chaining together of multiple performance metrics, the result metric abstraction offers a convenient metric
concatenation and result package dictionary creation via the
+ operator. To create the dictionary holding all
the performance metric results the user can simply write:
metric_1 + metric_2 + metric_3 + .... This makes the use
of the
+ operator very convenient because the produced results dictionary format exactly matches that which is
required when developing an encapsulating result package.
Example of result metric concatenation:
from aitoolbox.experiment.core_metrics.classification import \ AccuracyMetric, ROCAUCMetric, PrecisionRecallCurveAUCMetric accuracy_result = AccuracyMetric(y_true, y_predicted) roc_auc_result = ROCAUCMetric(y_true, y_predicted) pr_auc_result = PrecisionRecallCurveAUCMetric(y_true, y_predicted) results_dict = accuracy_result + roc_auc_result + pr_auc_result # results_dict will hold: # {'Accuracy': 0.95, 'ROC_AUC': 0.88, 'PrecisionRecall_AUC': 0.67}
Implementing New Result Metrics
When the needed result metric is not available in the AIToolbox, the users can easily implement their own new metrics. The approach is very similar to that of the new result package development.
In order to implement
a new result metric, the user has to create a new metric class which inherits from the base abstract result metric
aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric and implements the abstract method
aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric.calculate_metric().
As part of the
calculate_metric() the user has to implement the logic for the performance metric calculation and
return the metric result from the method. Predicted values and ground truth values normally needed for the performance
metric calculations are available inside the metric as object attributes and can thus be accessed as:
self.y_true
and
self.y_predicted throughout the metric class,
calculate_metric() included.
Example Result Metric implementation:
from sklearn.metrics import accuracy_score from aitoolbox.experiment.core_metrics.abstract_metric import AbstractBaseMetric class ExampleAccuracyMetric(AbstractBaseMetric): def __init__(self, y_true, y_predicted, positive_class_thresh=0.5): # All additional attributes should be defined before the AbstractBaseMetric.__init__ self.positive_class_thresh = positive_class_thresh AbstractBaseMetric.__init__(self, y_true, y_predicted, metric_name='Accuracy') def calculate_metric(self): if self.positive_class_thresh is not None: self.y_predicted = self.y_predicted >= self.positive_class_thresh return accuracy_score(self.y_true, self.y_predicted) | https://aitoolbox.readthedocs.io/en/latest/experiment/metrics.html | 2021-10-16T11:16:09 | CC-MAIN-2021-43 | 1634323584567.81 | [] | aitoolbox.readthedocs.io |
User Story - shows the Order, Type and Status fields, and tasks are sorted by order:
Copado also allows you to add attachments awaiting manual intervention.
In manual tasks, you can also select the environments to which the task should be applied in the Apply to picklist field. You can choose to apply a manual task to all the environments in the pipeline or to specific environments. If you don’t specify anything, the task is applied by default in all environments. Additionally, you can specify whether the task should apply or not to back-promotions. If you don’t want to enforce a particular manual task on back-promotions, simply select the Disable Task for Back-Promotions checkbox.
How to Create a Deployment Task
To create a deployment task, follow the steps below:
- Navigate to a User Story record.
- Click on Related and navigate to the Deployment Tasks related list.
- Click on New and select an option from the Type drop-down menu:
>>IMAGE are executed first. Let’s take a look at the chart below to see how this works:
. | https://docs.copado.com/article/zwkd2mx9zq-user-stories-deployment-tasks | 2021-10-16T11:19:01 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://files.helpdocs.io/U8pXPShac2/articles/zwkd2mx9zq/1625499539823/task-order.png',
'Tasks sorter by order'], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/zwkd2mx9zq/1625499387448/attach-file.png',
'Attach File option'], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/zwkd2mx9zq/1625499480444/manual-task-details.png',
'Disable Task for Back-Promotions'], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/zwkd2mx9zq/1605549422225/us-tasks.png',
None], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/zwkd2mx9zq/1575544348536/deployment-tasks-order.png',
None], dtype=object) ] | docs.copado.com |
Service
Queue Event. Inequality(ServiceQueueEvent, ServiceQueueEvent) Operator
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
public static bool operator != (Microsoft.SqlServer.Management.Smo.ServiceQueueEvent a, Microsoft.SqlServer.Management.Smo.ServiceQueueEvent b);
static member op_Inequality : Microsoft.SqlServer.Management.Smo.ServiceQueueEvent * Microsoft.SqlServer.Management.Smo.ServiceQueueEvent -> bool
Public Shared Operator != (a As ServiceQueueEvent, b As ServiceQueueEvent) As Boolean | https://docs.microsoft.com/fr-fr/dotnet/api/microsoft.sqlserver.management.smo.servicequeueevent.op_inequality?view=sql-smo-160 | 2021-10-16T13:18:40 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.microsoft.com |
Certificate Hierarchy Guide
Overview
The Corda security design heavily relies on the use of Public Key Infrastructure (PKI). The platform itself operates with the assumption that a certificate authority will manage the node on-boarding and permissioning processes. As such, there is an inherent requirement to provide an easy approach towards certificate hierarchy generation and deployment. The PKI Tool provides a simple way to define and create all the keys and certificates for your PKI.
In Corda, we distinguish between two types of PKI entities: Certificate Authorities (CA) and Signers (non-CA). The difference between the two is that CA can issue certificates and non-CA cannot. The latter one is limited only to signing data. Each of those entities maintains its own key pair (public and private keys) that is used to authenticate and sign data. Moreover, each of them needs to be certified (i.e. hold a certificate issued) by another CA. An entity’s certificate binds the legal name of the entity to its public key, with the signature of the certificate’s issuer providing the attestation to this binding. As well as issuing certificates, each CA is also responsible for maintaining information about certificate’s validity. Certificates can become invalid due to different reasons (e.g. keys being compromised or cessation of operation) and as such need to be revoked.
To be able to know whether a certificate has been revoked, each CA maintains a Certificate Revocation List (CRL). That CRL needs to be published such that it can be accessed by anybody who may participate in the network. By default the CRL is exposed via the Identity Manager Service, although it is recommended that this endpoint is wrapped in a caching HTTP proxy. This proxy layer can be behind a load balancer, providing high availability for delivery of the CRL.
With all of the above in mind, the output of the PKI Tool execution is a certificate hierarchy comprising of the key pairs (for each defined entity) accompanied with the certificates associated with those key pairs as well as signed static certificate revocation lists.
The PKI Tool is intended to make it easy to generate all the certificates needed for a Corda deployment. The tool generates the keys in the desired key store(s) and outputs a set of certificates necessary for correct Corda Network operation.
Corda Requirements
Corda nodes operate with the following assumptions on the certificates hierarchy:
- There are two certificates, one corresponding to the Identity Manager Service and the other one to the Network Map Service.
- They need to have the common root certificate, which is present in the node’s truststore. The length of the certificate chain can be arbitrary. As such, there can be any number of certificates between the Identity Manager Service and Network Map Service certificates as long as they root to the same certificate.
- They need to have a custom extension defining the role of the certificate in the context of Corda. See here for more details.
Other than that, Corda nodes stay agnostic to the certificate hierarchy (in particular the depth of the certificate hierarchy tree).
At the time of writing this document, the Corda Network assumes the certificate hierarchy that can be found here
.
Certificate Revocation List (CRL)
Every time two nodes communicate with each other they exchange their certificates and validate them against the Certificate Revocation List. In Corda, the certificate chains of the nodes are validated only during the TLS handshake. This means that every time an TLS connection is established between two nodes, the TLS certificates (together with the remaining certificate chain ending at the root certificate) are exchanged and validated at each node.
The network operator is responsible for certificate issuance and maintenance for each certificate starting at the Root certificate and ending at the Identity Manager and Network Map certificates. The rest of the certificate chain (i.e. every certificate below the Identity Manager certificate) falls into node operator responsibility.
The certificate revocation list verification applies to the entire chain. This means that every certificate in the chain
is going to be validated against the corresponding certificate revocation list during the SSL handshake.
Consequently, this means that a node operator is expected to provide and maintain the certificate revocation list for the Node CA.
Even though Corda supports this scenario, it might be a tedious task that a node operator does not want to deal with.
As such, Corda offers also an alternative solution, which allows a node to benefit from the certificate revocation list validation and at the
same time waives away the necessity of the certificate revocation list maintenance from the node operator.
The certificate revocation list validation process allows the certificate revocation list to be signed by a third party
authority (i.e. associated key pair) as long as its certificate is self-signed and trusted (i.e. it is present in the node’s trust store).
As such, in Corda, the certificate revocation list for the TLS level is signed by a dedicated self-signed certificate called TLS Signer,
which is then added to node’s trust store (in a similar way as the Corda Root certificate - distributed with the
network-trust-store.jks).
During the certificate revocation list validation process the trust store is consulted for the presence of the TLS Signer certificate.
What is the expected behaviour if a certificate is revoked?
Once a certificate is revoked (including the signing of a new CRL), nodes on the network should identify the change quickly. In CENM 1.3 and above,.3 and Signing Services
for
configuration of the Signing Service for CRLs (especially the
updatePeriod option).
Example Scenario
As an example, let us consider the following certificate hierarchy:
The certificate hierarchy presented above is currently (as of the time of writing this document) used in the Corda Network.
It follows practices applicable for certificate authorities providing a balance between security and simplicity of usage.
In this scenario, a network operator wants to create a CA hierarchy where the self-signed Root CA issues a certificate for the Subordinate CA which in turn issues
two certificates for both Identity Manager CA and Network Map (note that the Network Map is not a CA-type entity).
The root certificate is self-signed and its keys are to be protected with the highest security level. In normal circumstances,
they would be used just once to sign lover-level certificates (in this case the Subordinate CA) and then placed in some secure location,
preferably not being accessed anymore.
Further down in the hierarchy, the Subordinate certificate is then used to issue other certificates for other CAs.
Additionally, there is the TLS CRL signer entity, which is also self-signed and does not act as a CA.
As a matter of fact, for the purpose of signing the TLS CRL, we could reuse the Root CA certificate (as it is self-signed and is assumed to be in the network trust store),
however to keep the split of responsibilities let us assume that the network operator uses a separate certificate for that purpose.
Therefore, the TLS CRL signer certificate’s sole purpose is to sign a certificate revocation list, therefore the security constraints can be relaxed in this case,
compared to those applied to the Root CA.
As mentioned at the beginning of this document, each CA needs to maintain its own certificate revocation list. Therefore, along with the keys and certificates being created for all of those four entities, three certificate revocation lists need to be created and signed by Root CA, Subordinate CA and TLS CRL Signer respectively. The TLS CRL Signer differs from the others as, although it is not a CA, it still signs a certificate revocation list. That list is used by nodes during the SSL handshake process and in case where a node (which is a CA) is not able to provide for its own certificate revocation list. Regarding the Identity Manager, here we assume that the certificate revocation list is kept in the database and therefore no static (i.e. file based) certificate revocation list signing is required.
With all of those in mind we can see the certificate revocation list of the TLS chain validation process as follows:
- The Root CA certificate is self-signed and trusted - present in the node’s trust store. As such it does not require any certificate revocation list validation.
- The Subordinate CA certificate is validated by checking the certificate revocation list signed by the Root CA. In the diagram in the previous section, it is given as a static file called
root.crl.
- The Identity Manager Service CA certificate is validated by checking the certificate revocation list signed by the Subordinate CA. In the diagram in the previous section, it is given as a static file called
subordinate.crl.
- The Node CA certificate is validated by checking the certificate revocation list signed by the Identity Manager Service CA. This list is dynamically maintained and stored in the database.
- The TLS certificate is validated by checking the certificate revocation list signed by the TLS CRL signer. In the diagram in the previous section, it is given as a static file called
tls.crl.
Alternatively, the node operator may choose to use its own certificate revocation list infrastructure. However, this setup is out of the scope of the example scenario.
To generate all the artifacts of this scenario, a user needs to pass the correct configuration file to the PKI Tool. The following is the example of the configuration file that will result in generating the above certificate hierarchy:
defaultPassword = "password" keyStores = { "identity-manager-key-store" = { type = LOCAL file = "./key-stores/identity-manager-key-store.jks" } "network-map-key-store" = { type = LOCAL file = "./key-stores/network-map-key-store.jks" } "subordinate-key-store" = { type = LOCAL file = "./key-stores/subordinate-key-store.jks" } "root-key-store" = { type = LOCAL file = "./key-stores/root-key-store.jks" } "tls-crl-signer-key-store" = { type = LOCAL file = "./key-stores/tls-crl-signer-key-store.jks" } } certificatesStores = { "truststore" = { file = "./trust-stores/network-root-truststore.jks" } } certificates = { "cordatlscrlsigner" = { key = { type = LOCAL includeIn = ["tls-crl-signer-key-store"] } isSelfSigned = true subject = "CN=Test TLS Signer Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" includeIn = ["truststore"] crl = { crlDistributionUrl = "" file = "./crl-files/tls.crl" indirectIssuer = true issuer = "CN=Test TLS Signer Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" } }, "cordarootca" = { key = { type = LOCAL includeIn = ["root-key-store"] } isSelfSigned = true subject = "CN=Test Foundation Service Root Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" includeIn = ["truststore"] crl = { crlDistributionUrl = "" file = "./crl-files/root.crl" } }, "cordasubordinateca" = { key = { type = LOCAL includeIn = ["subordinate-key-store"] } signedBy = "cordarootca" subject = "CN=Test Subordinate CA Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" crl = { crlDistributionUrl = "" file = "./crl-files/subordinate.crl" } }, "cordaidentitymanagerca" = { key = { type = LOCAL includeIn = ["identity-manager-key-store"] } signedBy = "cordasubordinateca" subject = "CN=Test Identity Manager Service Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" role = DOORMAN_CA }, "cordanetworkmap" = { key = { type = LOCAL includeIn = ["network-map-key-store"] } signedBy = "cordasubordinateca" issuesCertificates = false subject = "CN=Test Network Map Service Certificate, OU=HQ, O=HoldCo LLC, L=New York, C=US" role = NETWORK_MAP } }
To simplify things even more, the PKI Tool assumes default values as much as possible so the user is only required to provide only essential information to the tool. At the same time, the tool allows for overriding those defaults and have the configuration adjusted to the specific needs of different scenarios.. | https://docs.r3.com/en/platform/corda/1.4/cenm/pki-guide.html | 2021-10-16T12:22:32 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.r3.com |
Asset URLs screen
The URLs screen of an asset shows you the full URL of the asset. The URL of the site forms the base of the asset’s URL, as well as where in the hierarchy of the site the asset is, and the name it was given at its creation.
For example, a standard page called * Matrix* exists under the Squiz page in a Site. The site URL is. The URL of the Matrix page will be. Hyphens replace any spaces in the asset’s name by default. For example,.
The URLs screen also lets you set up remaps for the asset. The remap will take the user from an old URL to the current URL of the asset, ensuring they see the asset’s content rather than an error.
URLs
This section lets you change the URLs of the asset as well as view the current URL of the asset.
The fields available are as follows:
- Paths
The name entered to create the asset will appear in this field by default.
You can add paths by clicking into the second box and entering the new path. For example, we can add another path called
matrix. This ability means that the user can view this asset in the site by entering the URL or.
Each URL path can be up to 255 characters in length. An upper limit of 1000 characters is shared across all URL paths you set.
You can change paths by clicking into these fields and entering the new path for the asset. When you do this, a new remap will appear in the Remaps section.
In the previous example, if we change the path
hometo
matrix, a remap will appear in the Remaps section. This change means that if the user enters the URL, they will be redirected or taken to the URL.
- Current URLs
this section lists the URLs for the asset. Multiple URLs will be listed if the site has more than one URL, or the asset has more than one URLs.
- Automatically add remaps?
This field lets you prevent the creation of a remap when you change the path. This option is selected by default, meaning that remaps will be created. Deselect this option before you click Save to prevent remapping.
Remaps
This section lists all of the current remaps for the asset. It also lets you set up new remaps.
To set up a remap: enter the old URL into the Old URL field and select which URL to map to from the list provided in the New URL field.
For example, we can enter an old URL of and select from the new URL field shown in the previous figure.
This change means that if the user enters the URL, they are taken to the URL.
You can give the remap an expiry time by entering the number of days into the Expires field. The system deletes a remap when it expires, and the user will have to use the correct URL. The Remaps field lists assets with remaps set up.
The Never Delete option lets you manually specify the Never Delete setting of the newly created remap.
This option Never Delete, as configured on the remap manager.
To delete a remap, click on the Delete box for the remap and click Save. The Never Delete setting determines the availability of this option.
The system will not delete remaps marked as Never Delete. You must disable this setting on the remap manager to delete the remap.
For more information, refer to the Remap Manager chapter in the system management manual. | https://docs.squiz.net/matrix/version/latest/using/working-with-assets/asset-screens-urls.html | 2021-10-16T11:15:19 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../_images/5-0-0_web-paths-section.png', 'URLs'], dtype=object)
array(['../_images/5-0-0_remaps-section.png', '5 0 0 remaps section'],
dtype=object)
array(['../_images/5-0-0_never-delete-field.png', 'Never Delete'],
dtype=object) ] | docs.squiz.net |
June 27, 2019 (GMP 11.4.3)
Enhancements
- Experience Google Chrome 70—Experience monitoring now uses Google Chrome 70 to interact with the web applications that you target. Upgrading Chrome ensures that AppNeta Monitoring Points stay up to date with performance and security improvements. In addition, the upgrade improves the resilience of some workflow scripts, and makes them even more representative of real end user experience. As with any Chrome upgrade, you may notice changes in the execution time of your Experience web path workflows – faster or slower – and the way time is attributed to network, server, or browser. In some cases, you may also see different or additional javascript errors. A purple diamond indicator will be placed under the End-User Experience and Milestone Breakdown charts on the Test Timeline page when Chrome is upgraded, so you will quickly be able to see whether it has had an affect on script execution.
- DNS monitoring capable—This release contains the enhancements necessary for Monitoring Points to provide DNS information to APM for DNS monitoring.
In addition, the release incorporates changes made in all EMP releases since the GMP 11.0.1 release. These include: | https://docs.appneta.com/release-notes/2019-06-27-gmp.html | 2021-10-16T12:11:07 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.appneta.com |
Russ Cox
19 January 2021
Today’s Go security release
fixes an issue involving PATH lookups in untrusted directories
that can lead to remote execution during the go get command.
We expect people to have questions about what exactly this means
and whether they might have issues in their own programs.
This post details the bug, the fixes we have applied,
how to decide whether your own programs are vulnerable to similar problems,
and what you can do if they are.
go
get
One of the design goals for the go command is that most commands – including
go build, go doc, go get, go install, and go list – do not run
arbitrary code downloaded from the internet.
There are a few obvious exceptions:
clearly go run, go test, and go generate do run arbitrary code – that's their job.
But the others must not, for a variety of reasons including reproducible builds and security.
So when go get can be tricked into executing arbitrary code, we consider that a security bug.
build
doc
install
list
run
test
generate
If go get must not run arbitrary code, then unfortunately that means
all the programs it invokes, such as compilers and version control systems, are also inside the security perimeter.
For example, we've had issues in the past in which clever use of obscure compiler features
or remote execution bugs in version control systems became remote execution bugs in Go.
(On that note, Go 1.16 aims to improve the situation by introducing a GOVCS setting
that allows configuration of exactly which version control systems are allowed and when.)
Today's bug, however, was entirely our fault, not a bug or obscure feature of gcc or git.
The bug involves how Go and other programs find other executables,
so we need to spend a little time looking at that before we can get to the details.
gcc
git
All operating systems have a concept of an executable path
($PATH on Unix, %PATH% on Windows; for simplicity, we'll just use the term PATH),
which is a list of directories.
When you type a command into a shell prompt,
the shell looks in each of the listed directories,
in turn, for an executable with the name you typed.
It runs the first one it finds, or it prints a message like “command not found.”
$PATH
%PATH%
On Unix, this idea first appeared in Seventh Edition Unix's Bourne shell (1979). The manual explained:.
:
:/bin:/usr/bin
Note the default: the current directory (denoted here by an empty string,
but let's call it “dot”)
is listed ahead of /bin and /usr/bin.
MS-DOS and then Windows chose to hard-code that behavior:
on those systems, dot is always searched first,
automatically, before considering any directories listed in %PATH%.
/bin
/usr/bin
As Grampp and Morris pointed out in their
classic paper “UNIX Operating System Security” (1984),
placing dot ahead of system directories in the PATH
means that if you cd into a directory and run ls,
you might get a malicious copy from that directory
instead of the system utility.
And if you can trick a system administrator to run ls in your home directory
while logged in as root, then you can run any code you want.
Because of this problem and others like it,
essentially all modern Unix distributions set a new user's default PATH
to exclude dot.
But Windows systems continue to search dot first, no matter what PATH says.
cd
ls
root
For example, when you type the command
go version
on a typically-configured Unix,
the shell runs a go executable from a system directory in your PATH.
But when you type that command on Windows,
cmd.exe checks dot first.
If .\go.exe (or .\go.bat or many other choices) exists,
cmd.exe runs that executable, not one from your PATH.
cmd.exe
.\go.exe
.\go.bat
For Go, PATH searches are handled by exec.LookPath,
called automatically by
exec.Command.
And to fit well into the host system, Go's exec.LookPath
implements the Unix rules on Unix and the Windows rules on Windows.
For example, this command
exec.LookPath
exec.Command
out, err := exec.Command("go", "version").CombinedOutput()
behaves the same as typing go version into the operating system shell.
On Windows, it runs .\go.exe when that exists.
version
(It is worth noting that Windows PowerShell changed this behavior,
dropping the implicit search of dot, but cmd.exe and the
Windows C library SearchPath function
continue to behave as they always have.
Go continues to match cmd.exe.)
SearchPath function
When go get downloads and builds a package that contains
import "C", it runs a program called cgo to prepare the Go
equivalent of the relevant C code.
The go command runs cgo in the directory containing the package sources.
Once cgo has generated its Go output files,
the go command itself invokes the Go compiler
on the generated Go files
and the host C compiler (gcc or clang)
to build any C sources included with the package.
All this works well.
But where does the go command find the host C compiler?
It looks in the PATH, of course. Luckily, while it runs the C compiler
in the package source directory, it does the PATH lookup
from the original directory where the go command was invoked:
import
"C"
cgo
clang
cmd := exec.Command("gcc", "file.c")
cmd.Dir = "badpkg"
cmd.Run()
So even if badpkg\gcc.exe exists on a Windows system,
this code snippet will not find it.
The lookup that happens in exec.Command does not know
about the badpkg directory.
badpkg\gcc.exe
badpkg
The go command uses similar code to invoke cgo,
and in that case there's not even a path lookup,
because cgo always comes from GOROOT:
cmd := exec.Command(GOROOT+"/pkg/tool/"+GOOS_GOARCH+"/cgo", "file.go")
cmd.Dir = "badpkg"
cmd.Run()
This is even safer than the previous snippet:
there's no chance of running any bad cgo.exe that may exist.
cgo.exe
But it turns out that cgo itself also invokes the host C compiler,
on some temporary files it creates, meaning it executes this code itself:
// running in cgo in badpkg dir
cmd := exec.Command("gcc", "tmpfile.c")
cmd.Run()
Now, because cgo itself is running in badpkg,
not in the directory where the go command was run,
it will run badpkg\gcc.exe if that file exists,
instead of finding the system gcc.
So an attacker can create a malicious package that uses cgo and
includes a gcc.exe, and then any Windows user
that runs go get to download and build the attacker's package
will run the attacker-supplied gcc.exe in preference to any
gcc in the system path.
gcc.exe
Unix systems avoid the problem first because dot is typically not
in the PATH and second because module unpacking does not
set execute bits on the files it writes.
But Unix users who have dot ahead of system directories
in their PATH and are using GOPATH mode would be as susceptible
as Windows users.
(If that describes you, today is a good day to remove dot from your path
and to start using Go modules.)
(Thanks to RyotaK for reporting this issue to us.)
It's obviously unacceptable for the go get command to download
and run a malicious gcc.exe.
But what's the actual mistake that allows that?
And then what's the fix?
One possible answer is that the mistake is that cgo does the search for the host C compiler
in the untrusted source directory instead of in the directory where the go command
was invoked.
If that's the mistake,
then the fix is to change the go command to pass cgo the full path to the
host C compiler, so that cgo need not do a PATH lookup in
to the untrusted directory.
Another possible answer is that the mistake is to look in dot
during PATH lookups, whether happens automatically on Windows
or because of an explicit PATH entry on a Unix system.
A user may want to look in dot to find a command they typed
in a console or shell window,
but it's unlikely they also want to look there to find a subprocess of a subprocess
of a typed command.
If that's the mistake,
then the fix is to change the cgo command not to look in dot during a PATH lookup.
We decided both were mistakes, so we applied both fixes.
The go command now passes the full host C compiler path to cgo.
On top of that, cgo, go, and every other command in the Go distribution
now use a variant of the os/exec package that reports an error if it would
have previously used an executable from dot.
The packages go/build and go/import use the same policy for
their invocation of the go command and other tools.
This should shut the door on any similar security problems that may be lurking.
os/exec
go/build
go/import
Out of an abundance of caution, we also made a similar fix in
commands like goimports and gopls,
as well as the libraries
golang.org/x/tools/go/analysis
and
golang.org/x/tools/go/packages,
which invoke the go command as a subprocess.
If you run these programs in untrusted directories –
for example, if you git checkout untrusted repositories
and cd into them and then run programs like these,
and you use Windows or use Unix with dot in your PATH –
then you should update your copies of these commands too.
If the only untrusted directories on your computer
are the ones in the module cache managed by go get,
then you only need the new Go release.
goimports
gopls
golang.org/x/tools/go/analysis
golang.org/x/tools/go/packages
After updating to the new Go release, you can update to the latest gopls by using:
GO111MODULE=on \
go get golang.org/x/tools/[email protected]
and you can update to the latest goimports or other tools by using:
GO111MODULE=on \
go get golang.org/x/tools/cmd/[email protected]
You can update programs that depend on golang.org/x/tools/go/packages,
even before their authors do,
by adding an explicit upgrade of the dependency during go get:
GO111MODULE=on \
go get example.com/cmd/thecmd golang.org/x/[email protected]
For programs that use go/build, it is sufficient for you to recompile them
using the updated Go release.
Again, you only need to update these other programs if you
are a Windows user or a Unix user with dot in the PATH
and you run these programs in source directories you do not trust
that may contain malicious programs.
If you use exec.LookPath or exec.Command in your own programs,
you only need to be concerned if you (or your users) run your program
in a directory with untrusted contents.
If so, then a subprocess could be started using an executable
from dot instead of from a system directory.
(Again, using an executable from dot happens always on Windows
and only with uncommon PATH settings on Unix.)
If you are concerned, then we've published the more restricted variant
of os/exec as golang.org/x/sys/execabs.
You can use it in your program by simply replacing
golang.org/x/sys/execabs
import "os/exec"
with
import exec "golang.org/x/sys/execabs"
and recompiling.
We have been discussing on
golang.org/issue/38736
whether the Windows behavior of always preferring the current directory
in PATH lookups (during exec.Command and exec.LookPath)
should be changed.
The argument in favor of the change is that it closes the kinds of
security problems discussed in this blog post.
A supporting argument is that although the Windows SearchPath API
and cmd.exe still always search the current directory,
PowerShell, the successor to cmd.exe, does not,
an apparent recognition that the original behavior was a mistake.
The argument against the change is that it could break existing Windows
programs that intend to find programs in the current directory.
We don’t know how many such programs exist,
but they would get unexplained failures if the PATH lookups
started skipping the current directory entirely.
SearchPath
The approach we have taken in golang.org/x/sys/execabs may
be a reasonable middle ground.
It finds the result of the old PATH lookup and then returns a
clear error rather than use a result from the current directory.
The error returned from exec.Command("prog") when prog.exe exists looks like:
exec.Command("prog")
prog.exe
prog resolves to executable in current directory (.\prog.exe)
For programs that do change behavior, this error should make very clear what has happened.
Programs that intend to run a program from the current directory can use
exec.Command("./prog") instead (that syntax works on all systems, even Windows).
exec.Command("./prog")
We have filed this idea as a new proposal, golang.org/issue/43724. | http://docs.studygolang.com/blog/path-security | 2021-10-16T11:40:25 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.studygolang.com |
Connect a bot to Slack
APPLIES TO: SDK v4
This article shows how to add a Slack channel to a bot using one the following approaches:
- Create a Slack application using the Azure portal. It describes how to connect your bot to Slack using the Azure portal.
- Create a Slack application using the Slack adapter. It describes how to connect your bot to Slack using the adapter.
Create a Slack application using the Azure portal
Prerequisites
A bot deployed to Azure. Refer to Create a bot with the Bot Framework SDK and Deploy a basic bot.
Access to a Slack workspace with sufficient permissions to create and manage applications at. If you do not have access to a Slack environment you can create a workspace.
Create a Slack application.
Select Create App.
Add a new redirect URL
In the left pane, select OAuth & Permissions.
In the right pane, select Add a new Redirect URL.
In the input box, enter.
Select Add.
Select Save URLs.
Follow these steps to subscribe to six specific bot events. By subscribing to bot events, your app will be notified of user activities at the URL you specify.
In the left pane, select Event Subscriptions.
In the right pane, set Enable Events to On.
In Request URL, enter{YourBotHandle}, where
{YourBotHandle}is your bot handle, without the braces. This handle is the name you specified when deploying the bot to Azure. You can find it by going to the Azure portal.
In Subscribe to Bot Events, click Add Bot User Event.
In the list of events, select these six event types:
member_joined_channel
member_left_channel
message.channels
message.groups
message.im
message.mpim
Select Save Changes.
As you add events in Slack, it lists the scopes you need to request. The scopes you need will depend on the events you subscribe to and how you intend to respond to them. For Slack supported scopes, refer to Scopes and permissions. See also Understanding OAuth scopes for Bots.
Note
As of June 2020 Slack channel supports Slack V2 permission scopes which allow the bot to specify its capabilities and permissions in a more granular way. All newly configured Slack channels will use the V2 scopes. To switch your bot to the V2 scopes, delete and recreate the Slack channel configuration in the Azure portal Channels blade.
Enable sending messages to the bot by the users
In the left pane, select App Home.
In the right pane, in the Show Tabs section under the Messages Tab, check Allow users to send Slash commands and messages from the messages tab.
Add and configure interactive messages (optional)
- In the left pane, select Interactivity & Shortcuts.
- In the right pane, enter the Request URL.
- Select Save changes.
Configure your bot Slack channel
Two steps are rquired to configure your bot Slack channel. First you gather the Slack application credentials, then you use these credenatials to configure the Slcak channel in Azure.
Gather Slack app credentials
In the left pane, select Basic Information.
In the right pane, scroll to the App Credentials section. The Client ID, Client Secret, and Signing Secret required for configuring your Slack bot channel are displayed. Copy and store these credentials in a safe place.
Configure Slack channel in Azure
Select your Azure bot resource in the Azure portal.
In the left panel, select Channels,
In the right panel, select the Slack icon.
Paste the Slack app credentials you saved in the previous steps into the appropriate fields.
The Landing Page URL is optional. You may omit or change it.
Select Save. Follow the instructions to authorize your Slack app's access to your Development Slack Team.
On the Configure Slack page, confirm that the slider by the Save button is set to Enabled. Your bot is now configured to communicate with the users in Slack.
Create the steps below to get the replacement URL.
- Go to the Azure portal.
- Select your Azure bot resource.
- In the left pane, select Channels.
- In the right pane, right-click on the Slack channel name.
- In the drop-down menu, select Copy link.
- Paste this URL from your clipboard into the HTML provided for the Slack button.
Authorized users can click the Add to Slack button provided by this modified HTML to reach your bot on Slack.
Note
The link you pasted into the href value of the HTML contains scopes that can be refined as needed. See Scopes and permissions for the full list of available scopes.
Create a Slack application using the Slack adapter
As well as the channel available in the Azure Bot Service to connect your bot with Slack, you can also use the Slack adapter. In this article you will learn how to connect a bot to Slack using the adapter. This article will walk you through modifying the EchoBot sample to connect it to a Slack app.
Note
The instructions below cover the C# implementation of the Slack adapter. For instructions on using the JS adapter, part of the BotKit libraries, see the BotKit Slack documentation.
Adapter prerequisites
- The EchoBot C# sample code. You can use the sample slack adapter as an alternative to the modification of the echo bot sample, shown in the section Wiring up the Slack adapter in your bot.
- Access to a Slack workspace with sufficient permissions to create and manage applications at. If you do not have access to a Slack environment you can create a workspace for free.
Create a Slack application when using the adapter.
Gather required configuration settings for your bot
Once your app is created, collect the following information. You will need this to connect your bot to Slack.
In the left pane, select Basic Information.
In the right pane, scroll to the App Credentials section. The Verification Token, Client Secret, and Signing Secret required for configuring your Slack bot channel are displayed. Copy and store these credentials in a safe place.
Navigate to the Install App page under the Settings menu and follow the instructions to install your app into a Slack team. Once installed, copy the Bot User OAuth Access Token and, again, keep this for later to configure your bot settings.
Wiring up the Slack adapter in your bot
If you use the sample slack adapter, as an alternative to the modification of the echo bot sample, go directly to Complete configuration of your Slack app.
Install the Slack adapter NuGet package
Add the Microsoft.Bot.Builder.Adapters.Slack NuGet package. For more information on using NuGet, see Install and manage packages in Visual Studio
Create a Slack adapter class
Create a new class that inherits from the SlackAdapter class. This class will act as our adapter for the Slack channel and include error handling capabilities (similar to the BotFrameworkAdapterWithErrorHandler class already in the sample, used for handling other requests from Azure Bot Service).
public class SlackAdapterWithErrorHandler : SlackAdapter { public Slack Slack requests
We create a new controller which will handle requests from your slack app, on a new endpoint
api/slack instead of the default
api/messages used for requests from Azure Bot Service Channels. By adding an additional endpoint to your bot, you can accept requests from Bot Service channels, as well as from Slack, using the same bot.
[Route("api/slack")] [ApiController] public class SlackController : ControllerBase { private readonly SlackAdapter _adapter; private readonly IBot _bot; public SlackController(SlackAdapter adapter, IBot bot) { _adapter = adapter; _bot = bot; } [HttpPost] [HttpGet] public async Task PostAsync() { // Delegate the processing of the HTTP POST to the adapter. // The adapter will invoke the bot. await _adapter.ProcessAsync(Request, Response, _bot); } }
Add Slack app settings to your bot's configuration file
Add the 3 settings shown below to your appSettings.json file in your bot project, populating each one with the values gathered earlier when creating your Slack app.
"SlackVerificationToken": "", "SlackBotToken": "", "SlackClientSigningSecret": ""
Inject the Slack adapter In your bot startup.cs
Add the following line to the ConfigureServices method within your startup.cs file. This will register your Slack adapter and make it available for your new controller class. The configuration settings you added in the previous step will be automatically used by the adapter.
services.AddSingleton<SlackAdapter, SlackAdapterWithErrorHandler>();
Once added, your ConfigureServices method should Slack Adapter services.AddSingleton<SlackAdapter, SlackAdapterWithErrorHandler>(); // Create the bot as a transient. In this case the ASP Controller is expecting an IBot. services.AddTransient<IBot, EchoBot>(); }
Complete configuration of your Slack app
Obtain a URL for your bot
This section shows how to point your Slack app to the correct endpoint on your bot and how to subscribe the app to bot events to ensure your bot receives the related messages. To do this your bot must be running, so that Slack can verify the endpoint is valid. Also, for testing purposes, this section assumes that the bot is running locally on your machine and your Slack app communicates with it using ngrok.
Note
If you are not ready to deploy your bot to Azure, or wish to debug your bot when using the Slack command assumes your local bot is running on port 3978, otherwise change the port number to the correct value.
ngrok.exe http 3978 -host-header="localhost:3978"
Update your Slack app
Navigate back to the Slack API dashboard and select your app. You now need to configure 2 URLs for your app and subscribe to the appropriate events.
In the left pane, select OAuth & Permissions. In the right pane, enter the Redirect URL vlaue. It is your bot's URL, plus the
api/slackendpoint you specified in your newly created controller. For example,.
In the left pane, select Subscribe to Bot Events. In the right pane, enable events using the toggle at the top of the page. Then fill in the Request URL with the same URL you used in step 1.
Select Subsribe to bot events. Select Add Bot User Event. In the list of events, select these six event types:
member_joined_channel
member_left_channel
message.channels
message.groups
message.im
message.mpim
Select Save Changes.
Enable sending messages to the bot by the users
In the left pane, select App Home.
In the right pane, in the Show Tabs section under the Messages Tab, check Allow users to send Slash commands and messages from the messages tab.
Test your application in Slack
Log in the Slack work space where you installed your app (
http://<your work space>-group.slack.com/). You will see it listed under the Apps section in the left panel.
In the left panel, select your app.
In the right panel, write e message and send it to the application. If you used an echo bot, the application echoes back the message as shown in the figure below.
You can also test this feature using the sample bot for the Slack adapter by populating the appSettings.json file with the same values described in the steps above. This sample has additional steps described in the
README file to show examples of link sharing, receiving attachments, and sending interactive messages. | https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-slack?view=azure-bot-service-4.0 | 2021-10-16T13:41:52 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.microsoft.com |
Having a full understanding of the differences in permissions for an Account Owner, Admin, and User will allow all parties to make proper use of the Relay app and avoid potential confusion.
Key Notes to Remember:
All aspects of communication will remain the same whether you're an Owner, Admin, or User
The permissions/access for Users cannot be upgraded to match that of the account Owner/Admin nor downgraded to prevent access to Location data
If you wish to grant an Account User with Account Admin access, you would have to remove the user from your account and then reinvite them as an Account Admin. The same situation applies if you wish to downgrade the permissions for an Account Admin, you would have to remove their profile and reinvite them as an Account User.
Account Admins will not be able to edit/delete the profile of the Account Owner
List of Permissions
Owner/Amin Permissions
The Owner and Admins of the account will have access to the full suite of features available in the Relay app. They will have the ability to edit and manage Group Chats, Relay device settings, Interactive Channels, Geofences, other profiles on the account, along with activating Relays (Owners only) and inviting new Admins and Users.
Any app profile that was previously created by logging in with the account owner's credentials will retain the same permissions as the account owner. If an app profile was created by sending an invite to an individual, then they will be relegated to the permissions of a User.
*Note: As of 12/16/19 the ability to create a NEW app profile via sharing one's account credentials is no longer an option; one must send an invite to add a new Admin or User to their account.
User Permissions
Users or individuals who were sent an invite to join an account will have a more limited feature set in the app. Users are unable to manage or adjust anything other than their own app profile; however, they can still view location tracking and geofence information.
*Note: Users cannot authorize any transactions with support and will NOT be privy to any account-related information outside of the information that they can see in their app. | https://docs.relaygo.com/en/articles/3569221-roles-within-the-relay-app | 2021-10-16T11:48:55 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://downloads.intercomcdn.com/i/o/183702958/1ff8179eb31c70fcc9d1dc1a/Admin+Roles.png',
None], dtype=object) ] | docs.relaygo.com |
Source code for globus_sdk.services.auth.flow_managers.authorization_code
import logging import urllib.parse from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Union from globus_sdk import utils from ..oauth2_constants import DEFAULT_REQUESTED_SCOPES from ..response import OAuthTokenResponse from .base import GlobusOAuthFlowManager if TYPE_CHECKING: import globus_sdk logger = logging.getLogger(__name__)[docs]class GlobusAuthorizationCodeFlowManager(GlobusOAuthFlowManager): """ This is the OAuth flow designated for use by Clients wishing to authenticate users in a web application backed by a server-side component (e.g. an API). The key constraint is that there is a server-side system that can keep a Client Secret without exposing it to the web client. For example, a Django application can rely on the webserver to own the secret, so long as it doesn't embed it in any of the pages it generates. The application sends the user to get a temporary credential (an ``auth_code``) associated with its Client ID. It then exchanges that temporary credential for a token, protecting the exchange with its Client Secret (to prove that it really is the application that the user just authorized). :param auth_client: The ``AuthClient`` used to extract default values for the flow, and also to make calls to the Auth service. :type auth_client: :class:`ConfidentialAppAuthClient \ <globus_sdk.ConfidentialAppAuthClient>` :param redirect_uri: The page that users should be directed to after authenticating at the authorize URL. :type redirect_uri: str :param requested_scopes: The scopes on the token(s) being requested, as a space-separated string or iterable of strings. Defaults to ``openid profile email urn:globus:auth:scope:transfer.api.globus.org:all`` (that is, ``DEFAULT_REQUESTED_SCOPES`` from ``globus_sdk.services.auth.oauth2_constants``) """ def __init__( self, auth_client: "globus_sdk.AuthClient", redirect_uri: str, requested_scopes: Optional[Union[str, Iterable[str]]] = None, state: str = "_default", refresh_tokens: bool = False, ): # default to the default requested scopes self.requested_scopes = requested_scopes or DEFAULT_REQUESTED_SCOPES # convert scopes iterable to string immediately on load if not isinstance(self.requested_scopes, str): self.requested_scopes = " ".join(self.requested_scopes) # store the remaining parameters directly, with no transformation self.client_id = auth_client.client_id self.auth_client = auth_client self.redirect_uri = redirect_uri self.refresh_tokens = refresh_tokens self.state = state logger.debug("Starting Authorization Code Flow with params:") logger.debug(f"auth_client.client_id={auth_client.client_id}") logger.debug(f"redirect_uri={redirect_uri}") logger.debug(f"refresh_tokens={refresh_tokens}") logger.debug(f"state={state}") logger.debug(f"requested_scopes={self.requested_scopes}")[docs] def exchange_code_for_tokens(self, auth_code: str) -> OAuthTokenResponse: """ The second step of the Authorization Code flow, exchange an authorization code for access tokens (and refresh tokens if specified) :rtype: :class:`OAuthTokenResponse <.OAuthTokenResponse>` """ logger.debug( "Performing Authorization Code auth_code exchange. " "Sending client_id and client_secret" ) return self.auth_client.oauth2_token( { "grant_type": "authorization_code", "code": auth_code.encode("utf-8"), "redirect_uri": self.redirect_uri, } ) | https://globus-sdk-python.readthedocs.io/en/stable/_modules/globus_sdk/services/auth/flow_managers/authorization_code.html | 2021-10-16T12:35:15 | CC-MAIN-2021-43 | 1634323584567.81 | [] | globus-sdk-python.readthedocs.io |
Desktop GUI¶
ProjPicker provides two GUIs written in wxPython and tkinter. The wxPython-based GUI looks and feels native on any platform, but the dependency module must be installed using:
pip install wxPython
If wxPython is not installed, the GUI falls back to the tkinter-based GUI. The tkinter module is a part of the Python standard library.
It can be accessed with
projpicker -g
and appears on Linux as
OpenStreetMap tiling¶
The GUI utilizes GetOSM for OpenStreeMap tile fetching and visualization. No JavaScript is needed! While GetOSM was initially created as a part of ProjPicker, it is now an independent package that can be installed from PyPI.
Geometry drawing¶
Geometry can be drawn over the OpenStreetMap tiles and are added to the query builder. Supported geometries are point (points), poly (polygons), and bbox (bounding boxes).
Query builder¶
The ProjPicker GUI helps construct the query syntax with the provided query builder. It allows for custom queries to be created from drawn geometries in addition to editing and writing one’s own queries with ProjPicker’s flexible syntax.
Import / export queries¶
The GUI allows for the import and export of ProjPicker queries saved as a .ppik file. The file format is a plaintext format and can be edited both within the GUI and through other conventional text editors. Import or export can be chosen by right clicking on the query builder.
Searching¶
CRS IDs or any string fields in the CRS info tab can be searched for using the search box below the CRS list. Multiple words can be searched for by separating them with a semicolon, in which case all the words must be found in any string fields in CRSs. Search words are case-insensitive. | https://projpicker.readthedocs.io/en/latest/getting_started/desktop_gui.html | 2021-10-16T12:24:17 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../_images/desktop_gui.png', 'ProjPicker desktop GUI'],
dtype=object) ] | projpicker.readthedocs.io |
Changelog for package flexbe_testing
1.3.1 (2020-12-11)
1.3.0 (2020-11-19)
Merge pull request
#132
from LoyVanBeek/feature/test_require_launch_file_success Optionally fail test if launch-file fails
Clear up logging for exiting processes
Check all launched nodes have exited And check they exit code to decide success/fail of launch file
Optionally fail test if launch-file fails If the launch-file in a FlexBE test fails, signal this to the TestContext that can then optionally fail due to this This allows to run scripts etc to verify State/Behavior side effects
[flexbe_core] [flexbe_testing] [flexbe_widget] Use yaml backwards compatible
Merge remote-tracking branch 'origin/feature/core_rework' into develop # Conflicts: # flexbe_core/src/flexbe_core/core/operatable_state_machine.py # flexbe_onboard/src/flexbe_onboard/flexbe_onboard.py
Add support for python3
[flexbe_testing] Add a behavior to the self-test
[flexbe_testing] Fix check of userdata output
Major clean-up of most core components
Remove smach dependency
Contributors: Loy van Beek, Philipp Schillinger
1.2.5 (2020-06-14)
Merge branch 'develop' into feature/state_logger_rework
Contributors: Philipp Schillinger
1.2.4 (2020-03-25)
Merge pull request
#109
from Achllle/feature/testing/timeout_parameter Expose time-limit parameter from rostest
Merge pull request
#108
from Achllle/fix/test_bagfile_topic Retry reading bag file messages without backslash in unit tests
Expose time-limit parameter from rostest
Ignore topic backslash when no messages are found that way
Merge branch 'fmessmer-feature/python3_compatibility' into develop
Remove explicit list construction where not required
python3 compatibility via 2to3
Contributors: Achille, Philipp Schillinger, fmessmer
1.2.3 (2020-01-10)
Merge pull request
#97
from team-vigir/feature/test_behaviors flexbe_testing support for behaviors
[flexbe_testing] Remove deprecated state tester
[flexbe_testing] Allow specification of behavior name in test config
[flexbe_testing] Add support for behavior tests
Merge remote-tracking branch 'origin/develop' into feature/test_behaviors # Conflicts: # flexbe_testing/bin/testing_node # flexbe_testing/src/flexbe_testing/state_tester.py
Merge pull request
#94
from LoyVanBeek/feature/check_missing_testfile Fail test if provided test-yaml-file does not exist
Generate test results also when test file is missing
Move test configuration files to StateTester so failing to configure can also be included in the test results
Fail test if provided test-file does not exist There is no way to know which arguments are intended to be filenames, so best we can do is guess?
[flexbe_testing] Refactor testing framework as basis for new feature
Contributors: Loy van Beek, Philipp Schillinger
1.2.2 (2019-09-16) | http://docs.ros.org/en/noetic/changelogs/flexbe_testing/changelog.html | 2021-10-16T12:55:49 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.ros.org |
API Gateway 7.6.2 Kerberos Integration Guide Save PDF Selected topic Selected topic and subtopics All content Use KPS to store passwords for Kerberos authentication Kerberos authentication in API Gateway relies on keeping API Gateway in sync with Active Directory. If a password changes in Active Directory, it must also be updated in API Gateway. For example, you might have an Active Directory password policy where all passwords must change every 60 days and passwords that never change are not allowed. In this case, the updates to API Gateway are frequent, so it is important that they can be done easily, quickly, and without downtime. To achieve this, you can use a Key Property Store (KPS) to store passwords for both Kerberos clients and Kerberos services. A KPS is a table of data that policies running on API Gateway can reference as needed using selectors. You can view, populate, and update the data in KPS tables using API Gateway Manager. When a password is changed in Active Directory, you can update the password in the KPS at runtime, instead of redeploying the API Gateway configuration, or restarting API Gateway. Configure a KPS table for Kerberos passwords Populate data to the KPS table Update your Kerberos configuration to use the KPS table For more details on KPS tables, see the API Gateway Key Property Store User Guide. Configure a KPS table for Kerberos passwords This section describes how to configure a KPS table for storing passwords in Policy Studio. For more information on working in Policy Studio, see the API Gateway Policy Developer Guide. KPS tables are stored in KPS collections, so you must have a KPS collection to which you add the new KPS table. You can use an existing KPS collection, or create a new collection called, for example, Passwords with no collection alias prefix. For more details on how to configure a KPS collection, see the API Gateway Key Property Store User Guide. In the node tree, select the KPS collection you want to use, and click Add Table. Enter a name for your table (for example, Passwords). Click Add, enter an alias (such as Kerberos), and click OK. In the KPS table you created, go to the Structure tab, and click Add to add a field to the table. Set the following, and click OK: Name: name Type: java.lang.String Click Add, set the following, and click OK: Name: password Type: java.lang.String Select Primary Key for the field name and Encrypted for the field password. Click Save in the top right corner to save the configuration, and deploy the configuration to API Gateway. Populate data to the KPS table Use API Gateway Manager to populate the KPS table with entries for Kerberos principals, and to update the data when it changes. For more information on working in API Gateway Manager, see the API Gateway Administrator Guide. Log in to the API Gateway Manager, click Settings > Key Property Stores. Select the KPS table you created (Passwords), and select Actions > New Entry to add a Kerberos principal to the KPS table. In the name field, enter the Kerberos principal's user name in the Active Directory. In the password field, enter the Kerberos principal's password from Active directory, and click Save. Create an entry and fill in the user name and password from Active Directory for each of your Kerberos principals. You now have a KPS table storing the passwords for Kerberos authentication. To update the details in the table when a Kerberos principal's password changes in Active Directory, log in to API Gateway Manager, select your KPS table, select the principal you want to edit, and update the password to match Active Directory. Update your Kerberos configuration to use the KPS table You must update the Kerberos clients and Kerberos services to use selectors in the password field to pick up the actual passwords from the KPS table. In the Policy Studio node tree, click Environment Configuration > External Connections > Kerberos Clients. Select the Kerberos client you want, and click Edit. Select Wildcard Password, and enter the following: ${kps.<KPS table alias>['<name>'].password} For example: ${kps.Kerberos['TrustedGateway'].password} Here, TrustedGateway is the value of the name field in the KPS table, and thus the user name of the Kerberos principal in the Active Directory ([email protected]). The name field is used as the primary key for the KPS table. Repeat these steps for all your Kerberos clients you want to use the KPS table. In the node tree, click Environment Configuration > External Connections > Kerberos services, and repeat the steps for all your Kerberos services you want to use the KPS table. Deploy the configuration to API Gateway. Related Links | https://docs.axway.com/bundle/APIGateway_762_IntegrationKerberos_allOS_en_HTML5/page/Content/KerberosIntegration/kerberos_kps.htm | 2020-05-25T04:42:18 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.axway.com |
VMCaptureJob
The VMCaptureJob name space contains commands for capturing Virtual Center virtual machines, saving them in OVF (Open Virtualization Format) format, then deploying them as templates on other Virtual Centers, and as Virtual Guest Packages within the BMC Server Automation system.
This name space contains the following commands:
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/blcli/87/vmcapturejob-595584003.html | 2020-05-25T06:22:29 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
Release Management¶
The Release Manager is responsible for shepherding the release process to successful completion. This document describes their responsibilities. Some items must be done by people that have special privileges to do specific tasks (e.g. privileges to access the production apt server), but even if the Release Manager does not have those privileges, they should coordinate with the person that does to make sure the task is completed.
Pre-Release¶
Open a Release SecureDrop 1.x.y issue to track release-related activity. Keep this issue updated as you proceed through the release process for transparency.
Check if there is a new stable release of Tor that can be QAed and released as part of the SecureDrop release. If so, file an issue.
Check if a release candidate for the Tails release is prepared. If so, request people participating in QA to use the latest release candidate.
Ensure that a pre-release announcement is prepared and shared with the community for feedback. Once the announcement is ready, coordinate with other team members to send them to current administrators, post on the SecureDrop blog, and tweet out a link.
For a regular release for version 1.x.0, branch off
develop:
git checkout develop git checkout -b release/1.x
Warning
For new branches, please ask a
freedomofpressorganization administrator to enable branch protection on the release branch. We want to require CI to be passing as well as at least one approving review prior to merging into the release branch.
For each release candidate, update the version and changelog. Collect a list of the important changes in the release, including their GitHub issues or PR numbers, then run the
update_version.shscript, passing it the new version in the format
major.minor.patch~rcN, e.g.:
securedrop/bin/dev-shell ../update_version.sh 1.3.0~rc1
The script will open both the main repository changelog (
changelog.md) and the one used for Debian packaging in an editor, giving you a chance to add the changes you collected. In the Debian changelog, we typically just refer the reader to the
changelog.mdfile.
If you would like to sign the release commit, you will need to do so manually:
Create a new signed commit and verify the signature:
git reset HEAD~1 git commit -aS git log --show-signature
Ensure the new commit is signed, take note of the commit hash.
Edit
1.x.y-rcN.tagand replace the commit hash with the new (signed) commit hash.
Delete the old tag and create a new one based on the tag file edited above:
git tag -d 1.x.y-rcN git mktag < 1.x.y-rcN.tag > .git/refs/tags/1.x.y-rcN
Push the branch and tags:
- For
1.x.y~rc1, push the
release/1.x.ybranch and
1.x.y-rc1tag directly.
- For subsequent release candidates and the final release version, issue a PR with changelog and version changes into the
release/1.x.ybranch, and push the signed tag once the PR is merged.
Build Debian packages and place them on
apt-test.freedom.press. This is currently done by making a PR into a git-lfs repo here. Only commit packages with an incremented version number: do not clobber existing packages. That is, if there is already a deb called e.g.
ossec-agent-3.6.0-amd64.debin
master, do not commit a new version of this deb. Changes merged to
masterin this repo will be published within 15 minutes.
Note
If the release contains other packages not created by
make build-debs, such as Tor or kernel updates, make sure that they also get pushed to
apt-test.freedom.press.
Build logs from the above debian package builds should be saved and published according to the build log guidelines.
Write a test plan that focuses on the new functionality introduced in the release. Post for feedback and make changes based on suggestions from the community.
Encourage QA participants to QA the release on production VMs and hardware. They should post their QA reports in the release issue such that it is clear what was and what was not tested. It is the responsibility of the release manager to ensure that sufficient QA is done on the release candidate prior to final release.
Triage bugs as they are reported. If a bug must be fixed before the release, it’s the release manager’s responsibility to either fix it or find someone who can.
Backport release QA fixes merged into
developinto the release branch using
git cherry-pick -x <commit>to clearly indicate where the commit originated from.
At your discretion – for example when a significant fix is merged – prepare additional release candidates and have fresh Debian packages prepared for testing.
For a regular release, the string freeze will be declared by the translation administrator one week prior to the release. After this is done, ensure that no changes involving string changes are backported into the release branch.
Ensure that a draft of the release notes are prepared and shared with the community for feedback.
Release Process¶
If this is a regular release, work with the translation administrator responsible for this release cycle to review and merge the final translations and screenshots (if necessary) they prepare. Refer to the i18n documentation for more information about the i18n release process. Note that you must manually inspect each line in the diff to ensure no malicious content is introduced.
Prepare the final release commit and tag. Do not push the tag file.
Step through the signing ceremony for the tag file. If you do not have permissions to do so, coordinate with someone that does.
Once the tag is signed, append the detached signature to the unsigned tag:
cat 1.x.y.tag.sig >> 1.x.y.tag
Delete the original unsigned tag:
git tag -d 1.x.y
Make the signed tag:
git mktag < 1.x.y.tag > .git/refs/tags/1.x.y
Verify the signed tag:
git tag -v 1.x.y
Push the signed tag:
git push origin 1.x.y
Ensure there are no local changes (whether tracked, untracked or git ignored) prior to building the debs. If you did not freshly clone the repository, you can use git clean:
Dry run (it will list the files/folders that will be deleted):
git clean -ndfx
Actually delete the files:
git clean -dfx
Build Debian packages:
- Verify and check out the signed tag for the release.
- Build the packages with
make build-debs.
- Build logs should be saved and published according to the build log guidelines.
Step through the signing ceremony for the
Releasefile(s) (there may be multiple if Tor is also updated along with the SecureDrop release).
Coordinate with the Infrastructure team to put signed Debian packages on
apt-qa.freedom.press:
- If the release includes a Tor update, make sure to include the new Tor Debian packages.
- If the release includes a kernel update, make sure to add the corresponding grsecurity-patched kernel packages, including both
linux-image-*and
linux-firmware-image-*packages as appropriate.
Coordinate with one or more team members to confirm a successful clean install in production VMs using the packages on
apt-qa.freedom.press.
Ask Infrastructure to perform the DNS cutover to switch
apt-qa.freedom.pressto
apt.freedom.press. Once complete, the release is live.
Make sure that the default branch of documentation is being built off the tip of the release branch. Building from the branch instead of a given tag enables us to more easily add documentation changes after release. You should:
- Log into readthedocs.
- Navigate to Projects → securedrop → Versions → Inactive Versions → release/branch → Edit.
- Mark the branch as Active by checking the box and save your changes. This will kick off a new docs build.
- Once the documentation has built, it will appear in the version selector at the bottom of the column of the.
- Now set this new release as default by navigating to Admin → Advanced Settings → Global Settings → Default Version.
release/branchfrom the dropdown menu and save the changes.
- Verify that docs.securedrop.org redirects users to the documentation built from the release branch.
Create a release on GitHub with a brief summary of the changes in this release.
Make sure that release notes are written and posted on the SecureDrop blog.
Make sure that the release is announced from the SecureDrop Twitter account.
Make sure that members of the support portal are notified about the release.
Update the upgrade testing boxes following this process: Updating the base boxes used for upgrade testing.
Post-Release¶
After the release, carefully monitor the FPF support portal (or ask those that have access to monitor) and SecureDrop community support forum for any issues that users are having.
Finally, in a PR back to develop, cherry-pick the release commits (thus ensuring a consistent changelog in the future) and bump the version numbers in preparation for the next release (this is required for the upgrade testing scenario). | https://docs.securedrop.org/en/master/development/release_management.html | 2020-05-25T04:14:24 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.securedrop.org |
Please login or sign up. You may also need to provide your support ID if you have not already done so.
DB2 High Availability provides 24x7 availability for your DB2 database. It consists of the DB2 High Availability Disaster Recovery (HADR) feature, the DB2 Online Reorganization feature, and IBM Tivoli System Automation for Multiplatforms.
IBM DB2 High Availability Feature is identified by current IBM DB2 RDBMS pattern on both Unix and Windows Platforms and modelled as a Detail node linked to the IBM DB2 RDBMS SI | https://docs.bmc.com/docs/display/Configipedia/IBM+DB2+High+Availability+Feature | 2020-05-25T05:46:35 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
Why are we open sourcing our extensions?
We plan to open source many of our solutions, listed on library of tooling and guidance solutions, to share them as sample code, foster community collaboration and innovation.
Our mission is to “provide professional guidance, practical experience and gap-filling solutions”. When we ship a solution, we need to switch context to other gaps, limiting our ability to effectively invest in long-term maintenance and innovation. By embracing open source, we hope to enable the community to review the solutions, help fix bugs and contribute features they need.
Our open source solutions
2016.09.08 – Add latest OSS projects.
- Countdown Widget Extension
- Extracting effective permissions from TFS
- File Owner Extension
- Folder Management Extension
- Migrate assets from RM server to VSTS
- Print Cards Extension
- Roll-Up Board Widget Extension
- Sample Data Widget Extension
- Show Area Path Dependencies Extension
… more to come!
What are your thoughts on open source?
We look forward to hearing from you. Here are some ways to connect with us:
- Add a comment below
- Send us a tweet @almrangers | https://docs.microsoft.com/en-us/archive/blogs/visualstudioalmrangers/why-are-we-open-sourcing-our-extensions | 2020-05-25T06:21:52 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.microsoft.com |
Service Pack 1: version 8.3.01
This topic contains information about fixes and updates in BMC Server Automation 8.3 Service Pack 1 (product version 8.3.01), and provides instructions for downloading and installing the service pack.
Tip
For information about issues corrected in this service pack, see Known and corrected issues.
Enhancements
The following topics describe the updates and enhancements included in this service pack.
- Agent installation updates for SP1
- BLCLI updates for SP1
- Compliance and SCAP functionality updates for SP1
- Console management updates for SP1
- Patch management updates for SP1
- Security updates for SP1
- Virtualization updates for SP1
Downloading the service pack
Service Pack 1 for BMC Server Automation 8.3.00 includes full installers for all components. You can download the files for version 8.3.01 from the BMC Electronic Product Distribution (EPD) website. For download instructions, see Downloading the installation files. For a list of installation programs by OS, see Installation programs for BMC Server Automation.
Installing the service pack as a fresh product installation
If you are installing this service pack as a fresh product installation, download the Service Pack files and then follow the instructions in Performing the installation.
As an added step after configuring the database, run the scripts found in the 83-SP1-SQL_Update_Scripts.zip file. This file is available in the BMC Server Automation 8.3.01 section of the EPD. Run these scripts as the BMC Server Automation database owner (as provided to blasadmin). For more detailed instructions, see Running the SQL update scripts.
Upgrading to the service pack
To upgrade to the service pack, follow the instructions in Upgrading.
Note
The installation programs for BMC Vendor Patch Management were not upgraded for service packs. If you have installed version 8.3.00 for any of these programs, you do not need to upgrade these programs when upgrading to the service pack level.
Note the following special tasks that you must perform during an upgrade to a service pack, after upgrading the Application Server and Console:
Considerations in service pack implementation for patch management on Windows
If you are using Microsoft Windows, review the following considerations for patch management.
Resolving a potential RSCD Agent installation problem
If the target system is in a pending reboot state (for example, from a previous patch or other software installation), you can use the PENDINGREBOOTFORCE flag to allow the MSI installer to ignore the pending reboot state and allow the installation to proceed. After you confirm that the pending reboot state on the target is not a result of a previous RSCD installation, you can set
PENDINGREBOOTFORCE=1. Following is a command-line sample for the MSI installation program:
msiexec /I RSCD83-SP1.WIN32.msi /qn REBOOT=ReallySuppress PENDINGREBOOTFORCE=1 | https://docs.bmc.com/docs/ServerAutomation/83/release-notes-and-notices/service-pack-1-version-8-3-01 | 2020-05-25T05:52:49 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
Upgrading on Windows using the unified product installer
This topic describes how to use the unified product installer to interactively upgrade BMC Server Automation on 64-bit Windows platforms. It includes the following sections:
Note
You can alternatively use the unified product installer to upgrade in an unattended (silent) mode. For more information, see Upgrading silently using the unified product installer.
List of components that are upgraded by the unified product installer
List of components that are not upgraded by the unified product installer
Before you begin
Ensure that your environment meets the requirements discussed in Preparing for a Windows upgrade using the unified product installer.
Note
As of BMC Server Automation 8.6 SP1, the task of running SQL Update scripts, which was necessary in the past for any upgrade to a BMC Server Automation patch or service pack, is no longer required during an upgrade. The database upgrade is now handled internally by the unified product installer, which was introduced in version 8.6.
Note
If you are upgrading from BMC Server Automation 8.6 SP1 to BMC Server Automation 8.6 SP2 (8.6.01.106) and you have RSA authentication configured in your environment, the deployment fails to migrate to the latest version and the upgrade is not even attempted. As a workaround, disable RSA authentication before you start the upgrade. When you finish upgrading, then re-enable RSA authentication. For more information, see Implementing RSA SecurID authentication.
To upgrade BMC Server Automation using the unified product installer
Tip
Prior to upgrading your production environment, it is best practice to test the upgrade in a duplicated environment.
Warning
Ensure that you use a fresh copy of the installation folder (and its contents), for each environment that the unified product installer is run in. The unified product installer saves environment-related information in the installation folder (Disk1), which will cause errors if the same copy is used for upgrading different environments.
Download and extract the installation package appropriate for the operating system level and hardware platform in a
<temporary location>. The package follows the naming convention BBSA<version>-<platform>.zip, and contains the unified product installation program files.
Download this package to the host computer of the Application Server that was set up as a configuration server (for more about this type of Application Server setup, see Application Server types).
Note
Make sure that the temporary location where you extract the installation package does not contain the string nsh in its path.
Do not extract the unified product installer into a directory that contains a space in it, for example, E:\BMC Upgrade_8.8_Package\. The extra space can cause unexpected errors during the upgrade. Create a directory like E:\BMC_Upgrade_88_Package\ instead.
Extract the RSCDAgent.zip file and copy the rscd folder to the following location before running the unified product installer (The unified product installer uses the RSCD installers while installing or upgrading BMC Server Automation in your environment):
<temporary location>
\files\installer\
Run the installation file for BMC Server Automation (setup.exe).
Follow the instructions in the installation wizard and. For more information, on Authentication profile credentials, see Setting up an authentication profile and Implementing authentication.
- The unified product installer program displays the different types of servers that are present in the BMC Server Automation environment and their count.
If the unified product installer was successful in connecting with all servers, you can proceed with the installation. Skip to step 8.
Otherwise, click Next to continue to step 7.
- If any of the remote servers do not have an RSCD Agent installed, the wizard displays a list of those servers. You can choose from the following options:
- Manually install an RSCD Agent on each of the listed remote servers, and then resume the installation through the unified product installer.
- Authorize the unified product installer to install an RSCD Agent on each of the listed remote servers by providing the following information:
The name of a local super user (local Administer or Administrator-equivalent local user on Windows) to which the RSCD Agent should map incoming connections during the installation.
The default is Administrator.
Note: The installer does not validate the specified user to ensure that it is present and has administrator privileges on each of the target machines.
- Host name or IP address of the PSExec host computer.
- User credentials (user name and password) for establishing an SSH connection to the remote hosts.
If user credentials are the same on all remote servers, select the Use Common Credentials check box, and enter credentials in the fields below the check box. Otherwise, clear the check box and enter credentials for each of the servers directly into the table that lists the servers.
Click Install to proceed with the upgrade of all BMC Server Automation components that are present in your environment.
Notes
If product components are detected during the upgrade on remote Windows machines in the BMC Server Automation environment, installers are automatically copied to the C:\BBSAInstallerDumpDir directory on the remote machines. These installers are used to automatically upgrade product components on those machines.
If problems arise during the upgrade, the on-screen error messages contain instructions and guidance to help you troubleshoot the problems, and further information is available in the log files. For a list of log files written during the upgrade process, see the Troubleshooting section..
During the upgrade, the original Application Server deployments are backed up. The backup files are stored in <installation directory>/br.
Where to go from here BMC Server Automation Console and Upgrading the RSCD Agent using an Agent Installer Job.
Warning
Modifying the name or path of these depot objects may cause errors in the agent installation process.
- Upgrade any remaining product components that were not upgraded by the unified product installer.
- If you adjusted security settings before the upgrade (as described in the troubleshooting instructions for security settings), remember to re-adjust your security settings, based on your unique needs and the IT security policies at your organization.
For Windows patching, you may still be using the PD5.cab, or HF7b.cab configuration files for Windows patching. However, BMC Server Automation 8.6 and later versions do not support the PD5.cab, or HF7b.cab configuration files, and you must use the PD5.xml, or HF7b.xml files instead. The Windows catalog update job fails if you use the .cab configuration files. To update the configuration files used for Windows patching see, Global configuration parameters.
- To fully support TLS version 1.2 as the default communication protocol used by the RSCD Agent in BMC Server Automation 8.9.01 or later, ensure that also the Network Shell component is upgraded to version 8.9.01 or later on any computer that hosts the BMC Server Automation Console. The Network Shell is normally upgraded together with the BMC Server Automation Console. | https://docs.bmc.com/docs/ServerAutomation/86/upgrading/upgrading-on-windows-using-the-unified-product-installer | 2020-05-25T06:09:17 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
Caution
Buildbot no longer supports Python 2.7 on the Buildbot master.
2.2.4. Buildmaster Setup¶
2.2.4.1..)
Your master will need a database to store the various information about your builds, and its configuration.
By default, the
sqlite3 backend will be used.
This needs no configuration, neither extra software.
All information will be stored in the file
state.sqlite.
Buildbot however supports multiple backends.
See Using A Database Server for more options.
Buildmaster Options¶
This section lists options to the
create-master command.
You can also type
buildbot create-master --help for an up-to-the-moment summary.
--no-logrotate
¶
This disables internal worker log management mechanism. With this option worker does not override the default logfile name and its behaviour giving a possibility to control those with command-line options of twistd daemon.
--relocatable
¶
This creates a “relocatable”
buildbot.tac, which uses relative paths instead of absolute paths, so that the buildmaster directory can be moved about.
--config
¶
The name of the configuration file to use. This configuration file need not reside in the buildmaster directory.
--log-count
¶
This is the number of log rotations to keep around. You can either specify a number or
Noneto keep all
twistd.logfiles around. The default is 10.
2.2.4 from Buildbot 0.8.x for a guide to upgrading from 0.8.x to 0.9.x. | https://docs.buildbot.net/2.5.0/manual/installation/buildmaster.html | 2020-05-25T04:26:46 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.buildbot.net |
Setting up inter-node communication
The nodes in a cluster setup communicate with one another using the following inter-node communication mechanisms:
- Nodes that are within the network (same subnet) communicate with each other through the cluster backplane. The backplane must be explicitly set up. See the detailed steps listed below.
- Across networks, steering of packets is done through a GRE tunnel and other node-to-node communication is routed across nodes as required. Note:
- A cluster can include nodes from different networks from NetScaler 11.0 onwards.
- In an L3 cluster deployment, packets between NetScaler appliance nodes are exchanged over an unencrypted GRE tunnel that uses the NSIP addresses of the source and destination nodes for routing. When this exchange occurs over the internet, in the absence of an IPsec tunnel, the NSIPs is exposed on the internet, and this can result in security issues. Citrix advises customers to establish their own IPsec solution when using -> NetScaler appliance MTU of server data plane is 7500 and of the client data plane is 8922, then the MTU of cluster backplane must be set to 78 + 8922 = 9000. To set this MTU, use the following command:
> set interface <backplane_interface> -mtu <value>
The MTU for interfaces of the backplane switch must be specified to be greater than or equal to 1578 bytes, if the cluster has features like MBF, L2 policies, ACLs, routing in CLAG deployments, and vPath. | https://docs.citrix.com/en-us/netscaler/11-1/clustering/cluster-setup/cluster-setup-backplane.html | 2020-05-25T05:29:18 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.citrix.com |
Agent cannot connect to the Server
When the Session Recording Agent cannot connect to the Session Recording Server, signed by a CA that the server hosting the Session Recording Agent does not trust or the server hosting the Session Recording Agent does not have a CA certificate. Alternatively, the certificate might have expired or been revoked.
Solution: Verify that the correct CA certificate is installed on the server hosting the Session Recording Agent or use a CA that is trusted.
The remote server returned an error: (403) forbidden. This is a standard HTTPS error displayed when you attempt to connect using HTTP (nonsecure protocol). The machine hosting the Session Recording Server rejects the connection because it accepts only secure connections.
Solution:.
Solution: Add the Authenticated Users group back to. The IIS might be offline or restarted, or the entire server might be offline.
Solution: Verify that the Session Recording Server is started, IIS is running on the server, and the server is connected to the network.
The remote server returned an error: 401 (Unauthorized). This error manifests itself in the following ways:
- On startup of the Session Recording Agent Service, an error describing the 401 error is recorded in the event log.
- Policy query fails on the Session Recording Agent.
- Session recordings are not captured on the Session Recording Agent.
Solution: Ensure that the NT AUTHORITY\Authenticated Users group is a member of the local Users group on the Session Recording Agent. | https://docs.citrix.com/en-us/session-recording/current-release/troubleshooting/session-recording-agent-cannot-connect.html | 2020-05-25T04:40:38 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.citrix.com |
Corda repo layout
The Corda repository comprises the following folders:
- buildSrc contains necessary gradle plugins to build Corda
- client contains libraries for connecting to a node, working with it remotely and binding server-side data to JavaFX UI
- confidential-identities contains experimental support for confidential identities on the ledger
-
- experimental contains platform improvements that are still in the experimental stage
-
- samples contains all our Corda demos and code samples
- testing contains some utilities for unit testing contracts (the contracts testing DSL) and flows | https://docs.corda.net/docs/corda-os/3.0/corda-repo-layout.html | 2020-05-25T05:29:50 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.corda.net |
WooCommerce WooCommerce > Settings > Integration > Product Support you can configure the default topic settings.
On this page, you will be able to set the default topic title, as well as the default topic content to use when creating the initial thread for each product forum. Both of the fields will be able to leverage the the name of the product that the forum is being created for, by using the %product_title% placeholder text. We have provided some default text for you out of the box, but you can customize as much as you want.
Usage
When creating or editing a product, you will find a new metabox labeled Product Support in the right-hand sidebar. Within this metabox you can optionally enable support for the product and select an existing group/forum or create a new one.
If you are choosing to use BuddyPress Groups for support, and choose the “Create new group” option, the group will be made upon product publish, and take its name from the name given to the product. When a user purchases any product(s) that has support enabled they will automatically be added to all associated groups.
If you are choosing to use bbPress, and choose new group/forum you can also optionally create the first discussion topic (based on the plugin settings in WooCommerce > Settings > Integration > Product Support). This first discussion topic will be made sticky and also locked so that it always appears at the top and is not open to discussion by users.
bbPress integration users will automatically gain access to ALL support forums (as this is the intended behavior of bbPress).
If you prefer users to only access forums for their purchased products we recommend using BuddyPress.
Please note: If you are using a subscription product, the customer will lose access to the forum after the subscription has ended. | https://docs.pluginize.com/article/107-woocommerce-product-support-introduction | 2020-05-25T04:27:01 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.pluginize.com |
Naming conventions
Some services in Catel support naming conventions. For example, the
IViewLocator and
IViewModelLocator allow naming conventions to prevent a user from having to register all views and view models. Internally, the naming conventions are resolved using the
NamingConvention helper class. This part of the documentation explains the possible constants in naming conventions.
[AS] constant
The [AS] constant will be replaced by the assembly name. For example, the following naming convention:
[AS].Views
in assembly Catel.Examples will be resolved as:
Catel.Examples.Views
[VM] constant
The [VM] constant will be replaced by the name of the view model without the ViewModel postfix. For example, the following naming convention:
[AS].ViewModels.[VW]ViewModel
in assembly Catel.Examples and for type Catel.Examples.ViewModels.MyViewModel will be resolved as:
Catel.Examples.ViewModels.MyViewModel
[VW] constant
The [VW] constant will be replaced by the name of the view without the View, Control, Page or Window postfixes. For example, the following naming convention:
[AS].Views.[VM]View
in assembly Catel.Examples and for type Catel.Examples.Views.MyView will be resolved as:
Catel.Examples.Views.MyView
[UP] constant
Sometimes it is not possible to use the [AS] constant because the assembly name is not used in the namespace. For example, for an application called PersonApplication where the client assembly is PersonApplication.Client, the root namespace will still be PersonApplication. Therefore, it is recommend to use the [UP] constant for this situation.
The [UP] constant will move the namespaces up by one step. It automatically detects the right separator (\ (backslash), / (slash), . (dot) and | (pipe) are supported).
The following naming convention:
[UP].Views.[VM]View
for type Catel.Examples.ViewModels.MyViewModel will be resolved as:
Catel.Examples.Views.MyView
[CURRENT] constant
Some people prefer to put classes into the same namespace (such as views and view models).
The [CURRENT] constant will use the same namespace.
The following naming convention:
[CURRENT].[VM]View
for type *Catel.Examples.MyViewModel* will be resolved as:
Catel.Examples.MyView
Have a question about Catel? Use StackOverflow with the Catel tag! | https://docs.catelproject.com/5.8/catel-mvvm/locators-naming-conventions/naming-conventions/ | 2020-05-25T03:38:15 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.catelproject.com |
Authentication With Keystone¶
Glance, users with admin context, or tenants/users with whom the image has been shared.
Configuring the Glance servers to use Keystone¶. API to use Keystone¶
Configuring Glance API to use Keystone is relatively straight
forward. The first step is to ensure that declarations for the two
pieces of middleware exist in the
glance-api-paste.ini. Here is
an example for
authtoken:
[filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory auth_url = project_domain_id = default project_name = service_admins user_domain_id = default username = glance_admin password = password1234
The actual values for these variables will need to be set depending on
your situation. For more information, please refer to the Keystone
documentation on the
auth_token middleware.
In short:
The
auth_urlvariable points to the Keystone service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens.
The auth credentials (
project_name,
project_domain_id,
user_domain_id,
username, and
password) will be used to retrieve a service token. That token will be used to authorize user tokens behind the scenes.¶. | https://docs.openstack.org/glance/latest/admin/authentication.html | 2020-05-25T04:43:38 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.openstack.org |
": { } }
Node Errors
>>IMAGE." } } | https://docs.losant.com/workflows/data/http/ | 2020-05-25T04:21:12 | CC-MAIN-2020-24 | 1590347387219.0 | [array(['/images/workflows/data/http-node.png', 'HTTP Node HTTP Node'],
dtype=object)
array(['/images/workflows/data/http-node-request-config.png',
'HTTP Node Basic Configuration HTTP Node Basic Configuration'],
dtype=object)
array(['/images/workflows/data/http-node-header-config.png',
'Header Configuration Header Configuration'], dtype=object)
array(['/images/workflows/data/http-node-auth-config.png',
'Authorization Configuration Authorization Configuration'],
dtype=object)
array(['/images/workflows/data/http-node-ssl-config.png',
'SSL Configuration SSL Configuration'], dtype=object)
array(['/images/workflows/data/http-node-response-config.png',
'Response Configuration Response Configuration'], dtype=object)
array(['/images/workflows/data/http-node-error-config.png',
'Error Configuration Error Configuration'], dtype=object)] | docs.losant.com |
A paged scroll view that shows a collection of photos.
This view provides a light-weight implementation of a photo viewer, complete with pinch-to-zoom and swiping to change photos. It is designed to perform well with large sets of photos and large images that are loaded from either the network or disk.
It is intended for this view to be used in conjunction with a view controller that implements the data source protocol and presents any required chrome.
The data source for this photo album view.
This is the only means by which this photo album view acquires any information about the album to be displayed.
The delegate for this photo album view.
Any user interactions or state changes are sent to the delegate through this property.
Whether zooming is enabled or not.
Regardless of whether this is enabled, only original-sized images will be zoomable. This is because we often don't know how large the final image is so we can't calculate min and max zoom amounts correctly.
By default this is YES.
Whether small photos can be zoomed at least until they fit the screen.
By default this is YES.
The background color of each photo's view.
By default this is [UIColor blackColor].
An image that is displayed while the photo is loading.
This photo will be presented if no image is returned in the data source's implementation of photoAlbumScrollView:photoAtIndex:photoSize:isLoading:.
Zooming is disabled when showing a loading image, regardless of the state of zoomingIsEnabled.
By default this is nil.
Notify the scroll view that a photo has been loaded at a given index.
You should notify the completed loading of thumbnails as well. Calling this method is fairly lightweight and will only update the images of the visible pages. Err on the side of calling this method too much rather than too little.
The photo at the given index will only be replaced with the given image if photoSize is of a higher quality than the currently-displayed photo's size.
The current center page index.
This is a zero-based value. If you intend to use this in a label such as "page ## of n" be sure to add one to this value.
Setting this value directly will center the new page without any animation..
The number of pixels on either side of each page.
The space between each page will be 2x this value.
By default this is NIPagingScrollViewDefaultPageMargin.
The type of paging scroll view to display.
This property allows you to configure whether you want a horizontal or vertical paging scroll view. You should set this property before you present the scroll view and not modify it after.
By default this is NIPagingScrollViewHorizontal..
Dequeues a reusable page from the set of recycled pages.
If no pages have been recycled for the given identifier then this will return nil. In this case it is your responsibility to create a new page.
The current center page view.
If no pages exist then this will return nil.
Returns YES if there is a next page.
Returns YES if there is a previous page.
Move to the next page if there is one.
Move to the previous page if there is one.
Move to the given page index with optional animation and option to enable page updates while scrolling.
NOTE: Passing YES for moveToPageAtIndex:animated:updateVisiblePagesWhileScrolling will cause every page from the present page to the destination page to be loaded. This has the potential to cause choppy animations.
Move to the given page index with optional animation.
Stores the current state of the scroll view in preparation for rotation.
This must be called in conjunction with willAnimateRotationToInterfaceOrientation:duration: in the methods by the same name from the view controller containing this view.
Updates the frame of the scroll view while maintaining the current visible page's state.
The user has double-tapped the photo to zoom either in or out. | https://docs.nimbuskit.info/NIPhotoAlbumScrollView.html | 2020-05-25T04:26:56 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.nimbuskit.info |
A Parasoft Docker image is a template that includes the Parasoft Virtualize Server, Parasoft Continuous Testing Platform, Parasoft Data Repository Server, all required software dependencies (e.g., Apache Tomcat, JRE …), and default configuration settings for connecting to Parasoft License Server. In this section:
Prerequisites
Ensure that you meet the system requirements for the following components:
Licensing
The machine ID used for licensing Parasoft products deployed in Docker is derived from the Docker container ID. If you clone the Docker container, the clone will have a different machine ID and require a new license unless you have a floating license. For floating license issued from License Server, we recommend using License Server on the network outside of Docker. Contact your Parasoft representative if you have any additional questions about licensing.
Configuring a New Docker Image
To deploy on Docker, you configure default connection details in a configuration file, build Docker images, then run the soavirt Docker image.
- Download and install Docker for Windows as described in. This page contains Windows instructions and links to Linux and Mac instructions.
- Launch a Command Prompt and change directory (cd) to the folder where the Parasoft Docker files were extracted. This folder will contain the following subfolders
- ctp
- datarepository
- server-jre8
- soavirt
- tomcat8
(Optional) Edit the default values for environment variables in the soavirt/Dockerfile file and ctp/Dockerfile file. The ENV command declares a new environment variable. The environment variable name and value should be separated by a space.
How to Exclude Data Repository
If you do not want to include Data Repository in the Docker image, change the first line of the soavirt/Docker file from
FROM datarepositoryto
FROM tomcat8
Build Docker images for each of the sub folders by executing the following commands in the Docker Terminal in this order:
docker build -t server-jre8 server-jre8/ docker build -t tomcat8 tomcat8/ docker build -t datarepository datarepository/ docker build -t soavirt soavirt/ docker build -t ctp ctp/
Execute a command to run the newly created Docker image using the following format:
docker run -it --rm -p 8080:8080 -p 9080:9080 ctp
This will start up the Data Repository server, Virtualize server, and CTP.
CTP already running outside of the Docker container
If CTP is already running outside of the Docker container, run the soavirt Docker image instead:
docker run -it --rm -p 2424:2424 -p 9080:9080 soavirt
The
-itoption makes the running Docker container interactive so it will continue running until you press Ctrl-C in the terminal.
The
-rmoption configures CTP and the Virtualize server as a disposable sandbox. Upon shutdown, the Docker container will be removed and discard any changes to the CTP database, Virtualize workspace, and Data Repository. Don't use this option if you want to be able to shut down CTP and Virtualize and then pick up where you left off after a restart.
Setting
-p 2424:2424maps port 2424 from the Docker container to port 2424 in the host (for Data Repository).
Setting
-p 8080:8080maps port 8080 from the Docker container to port 8080 in the host (for CTP).
Setting
-p 9080:9080maps port 9080 from the Docker container to port 9080 in the host (for Virtualize).
You should now see the Virtualize server listed in CTP and be able to use the CTP web interface (by default, at) to create virtual assets or upload .pva files.
How to Create a Docker Image with Only CTP (No Data Repository or Virtualize)
If you want to create a Docker image with just CTP and no Data Repository or Virtualize, change the first line of the ctp/Dockerfile from
FROM soavirt to
FROM tomcat8 and then rebuild the ctp image.
Changing the Configuration
If you want to override the default configuration (for example, to use a different CTP or License Server) without rebuilding the soavirt and ctp images, do the following:
- Shut down the running container.
- Override the environment variables in the run command using the
-eoption and environment variable name/value separated with an equals sign.
docker run -it --rm -p 2424:2424 -p 9080:9080 -e CTP_HOST=em.acme.com -e CTP_PORT=8080 -e LICENSE_SERVER_HOST=ls.acme.com soavirt
docker run -it --rm -p 2424:2424 -p 9080:9080 -e CTP_HOST=10.10.255.47 -e CTP_PORT=8080 -e LICENSE_SERVER_HOST=license.parasoft.com soavirt
You should now see the Virtualize server listed in CTP and be able to use the CTP web interface (by default, at) to create virtual assets or upload .pva files.
Changing the Configuration Defaults Inside Docker Images
You can change the configuration defaults inside the Docker image, which involves rebuilding the ctp and soavirt images.
- Shut down the Docker container (e.g., press Ctrl-C in the terminal).
- Delete the soavirt and ctp images:
- At the Docker terminal, enter the following command:
docker images
- Remove the ctp image by entering the following command
docker rmi ctp
- Remove the soavirt image by entering the following command
docker rmi soavirt
- Verify that the images were removed by entering the following command
docker images
- Edit the ctp/Dockerfile and soavirt/Dockerfile files as desired.
- Rebuild from the base folder in the Docker terminal by entering the following commands
docker build -t soavirt soavirt/
docker build -t ctp ctp/ | https://docs.parasoft.com/pages/?pageId=27517192&sortBy=name | 2020-05-25T06:07:30 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.parasoft.com |
cupy.split¶
cupy.
split(ary, indices_or_sections, axis=0)[source]¶
Splits an array into multiple sub arrays along a given axis.
- Parameters
ary (cupy.ndarray) – Array to split.
indices_or_sections (int or sequence of ints) – A value indicating how to divide the axis. If it is an integer, then is treated as the number of sections, and the axis is evenly divided. Otherwise, the integers indicate indices to split at. Note that the sequence on the device memory is not allowed.
axis (int) – Axis along which the array is split.
- Returns
A list of sub arrays. Each array is a view of the corresponding input array.
See also | https://docs-cupy.chainer.org/en/latest/reference/generated/cupy.split.html | 2020-05-25T06:14:08 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs-cupy.chainer.org |
cupy.RawModule¶
- class
cupy.
RawModule(code=None, *, path=None, options=(), backend=u'nvrtc', translate_cucomplex=False)¶
User-defined custom module.
This class can be used to either compile raw CUDA sources or load CUDA modules (*.cubin).) can be loaded by providing its path, and kernels therein can be retrieved similarly.
Note
Each kernel in
RawModulepossesses independent function attributes.
Methods
Attributes | https://docs-cupy.chainer.org/en/v7/reference/generated/cupy.RawModule.html | 2020-05-25T05:11:58 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs-cupy.chainer.org |
BMC Remedy ITSM Suite installation and upgrade enhancements in Service Pack 2
BMC Remedy ITSM Suite 9.1.02 provides the following deployment-related enhancements and changes:
BMC Remedy ITSM Suite upgrade documentation enhancements
New upgrade planning document
The Planning an upgrade section takes a Q&A approach to help you plan the appropriate upgrade path, upgrade method (in-place or staged), and upgrade scope. It also provides considerations for the upgrade environment and how to get upgrade planning assistance.
Changes to recommended upgrade processes
The upgrade processes reflect the methodology recommended by BMC for upgrading your development, QA, and production environments. In version 9.1.02, the Remedy upgrade processes have been simplified.
The deprecated upgrade processes will continue to be available for reference; however, BMC strongly recommends that you follow one of the new, supported upgrade processes.
Changes to procedures for setting up staging systems
In version 9.1.01 and earlier, BMC provided multiple options for setting up staging systems when you needed to upgrade your hardware or database or when you wanted to migrate data by using the Delta Data Migration tool. These procedures were commonly referred to as the accelerated and duplicated methods for setting up staging servers.
In version 9.1.02, the options and procedures for setting up a staging system have been simplified. Instead of duplicating the entire Remedy environment, you need to install only the current AR System server version and replicate the database. Alternatively, if your environment resides on a VM, you can simply clone your environment.
For details, see Setting up systems for a staged upgrade.
Simplified upgrade documentation
Beginning with version 9.1.02, all upgrade procedures pertaining to upgrading a version earlier than 7.6.04 have been removed to reduce the number of pre-upgrade and post-upgrade steps that apply to that upgrade path. If you are upgrading from a version earlier than BMC Remedy ITSM Suite 7.6.04, you must first upgrade to BMC Remedy ITSM Suite 8.1.02. For instructions, see Upgrading in the BMC Remedy ITSM Suite 8.1 Deployment documentation.
Changes in preparing to install or upgrade
The Preparing section more clearly indicates the tasks that pertain to the upgrade or the installation process, and the planning worksheet is now more appropriately named the installation worksheet.
New location for BMC Remedy Single Sign-On documentation
You can find documentation for installing, upgrading, and configuring in the BMC Remedy Single Sign-On documentation.
BMC Remedy ITSM Suite upgrade enhancements
Automated copy of BMC Atrium CMDB attributes to BMC Remedy ITSM foundation during upgrade
The BMC Remedy IT Service Management (ITSM) 9.1 Service Pack 2 installer eliminates the phased approach to moving CMDB Asset attributes from BMC Atrium CMDB to BMC Remedy ITSM foundation (AST:Attributes). The 9.1 Service Pack 2 installer copies the attributes from BMC Atrium CMDB to AST:Attributes. CMDB Asset attributes are deleted outside of the installer by running the DeleteCMDBAttributesUtility.
For more information, see Deleting BMC Atrium CMDB attributes copied to AST:Attributes .
Processes automated during upgrade
Previously, before you upgraded to BMC Remedy ITSM suite, you had to manually change the BMC Remedy AR System Server (AR System server) configuration to disable processes and re-enable the configuration after the upgrade. When you upgrade a component in BMC Remedy AR System 9.1.02, the installer automatically disables background operations while the upgrade is in progress. After the upgrade, the initial server group configuration is restored. Manual processes such as disabling of hierarchical groups and escalations are automated when the installer runs in upgrade mode. These processes are re-enabled after the upgrade is complete.
The following table lists the server group behavior and the background processes that are disabled during upgrade.
Additionally, the Authentication chaining mode is set to ARS_ONLY and the restriction on the attachment size is removed.
AR System server production-ready configuration settings
Previously, the BMC Remedy AR System Server configuration that was shipped out of the box was modified by application installers to suit their functional requirement. In this version, the AR System server configuration shipped out of the box can be readily consumed by BMC Remedy IT Service Management (ITSM) applications. The configuration settings in the following table are out-of-the-box settings for the AR System server:
Note
During upgrade, the new default settings are updated if the settings are not already specified in Centralized Configuration. If the values are already specified, they are not updated by the installer.
The configurations in the table that are updated by the BMC Remedy AR System Server installer are benchmarked for a server with the following specifications:
- Number of servers: 2
- CPU core: 4
- RAM: 16 GB
- Disk space: 60 GB
If the configuration settings are incorrectly specified, running the BMC Remedy Configuration Utility identifies such values.
Default Java heap size settings
The BMC Remedy AR System server installer detects the available memory in the system and sets the minimum and maximum heap sizes.
The heap size values are specified based on performance benchmarking conducted in a lab environment. However, BMC recommends that you adjust the Java heap size based on your load conditions.
For more information about configuration and Java heap size settings, see Centralized configuration and Sizing baseline.
Faster BMC Remedy ITSM Suite upgrades
BMC lab tests indicate an overall 40 percent reduction in time to upgrade to BMC Remedy ITSM Suite 9.1.02. In addition to implementing the preceding upgrade enhancements, the upgrade experience has considerably improved by implementing the following:
- Previously, the BMC Remedy ITSM applications installation and upgrade caused the AR System serve to restart. In the new implementation, only the AR plug-in server is restarted, which reduces the downtime.
- BMC Remedy ITSM applications secondary servers can now be upgraded in parallel. However, for BMC Remedy AR platform components, the secondary servers need to upgraded sequentially.
Faster installation and upgrade on secondary servers
With BMC Remedy ITSM 9.1.02, the installer no longer unpacks all .def and .arx files that are required during a secondary server installation or upgrade. If required, these files are referenced from the primary server. This changes results in faster installation and upgrade on secondary servers.
BMC Remedy Configuration Check utility enhancements
Integration of the BMC Remedy Configuration Check utility with the BMC Remedy ITSM Suite installers
In version 9.1.02, the installer invokes pre-upgrade checks for the component being upgraded. Additionally, the installer checks for the following deployment considerations and accordingly invokes the required checks without requiring any user input:
- Product being deployed
- Installer mode: install or upgrade
- Server being upgraded: primary or secondary
You access the Configuration check report from the installer. The HTML reports generated by BMC Remedy Configuration Check utility display improved readability, and the error messages provide the required details to resolve issues quickly.
Optionally, you can run the Configuration Check utility at any time to check the environment or configuration of a component that is already installed or that you plan to upgrade.
BMC Remedy Configuration Check installed with AR System server
With version 9.1 Service Pack 2, the BMC Remedy Configuration Check utility is installed at <InstallDirectory>\ARSystem\arsystem as part of the AR System server installation
Updated configuration checks
The following table lists the new or updated configuration checks in version 9.1.02 of the BMC Remedy Configuration Check utility.
Deprecated configuration checks
The following table lists the new and updated configuration checks in version 9.1.02:
BMC Remedy ITSM database and platform support
With this release, the following database management system and enterprise application platform are supported:
Microsoft SQL Server 2016
- Red Hat JBoss Enterprise Application Platform 7.0
For all supported environments, required third-party software, and compatibility with other BMC products and integrations, see the Compatibility matrix.
Oracle Data Guard support
With this release, BMC Remedy ITSM Suite supports Oracle Data Guard. Oracle Data Guard helps recover from DB failures across the geographical sites. Clients can failover to the next available database in the Oracle Real Applications Cluster (RAC) setup. For information on configuring BMC Remedy AR System to support Oracle Data Guard, see Configuring Oracle Data Guard. | https://docs.bmc.com/docs/brid91/en/bmc-remedy-itsm-suite-installation-and-upgrade-enhancements-in-service-pack-2-825209907.html | 2020-05-25T06:23:21 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
OLEXP: Using Virus Protection Features in Outlook Express 6
There's a new e-mail virus going around according to a number of stories I've seen. Here's one account from All Headline News:
New Internet Virus Spreading Fast
The "Bagle" or "Beagle" worm arrives in an e-mail with the subject "hi" and the word "test" in the message body. If the accompanying attachment is executed, the worm is unleashed and tries to send itself to all e-mails listed in the user's address book.
As always, apply the latest patches, update your virus signatures, and encourage your friends and family to use caution when opening attachments in e-mail.
Here's a KB article that can help you secure a machine from theats like these. It might be a good idea to do this at home if you have kids or other family members who might not understand security:;en-us;291387&sb=tech | https://docs.microsoft.com/en-us/archive/blogs/brianjo/olexp-using-virus-protection-features-in-outlook-express-6 | 2020-05-25T06:05:34 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.microsoft.com |
Introducing the Telerik® Data Access Profiler and Tuning Advisor
Telerik Data Access Profiler and Tuning Advisor is a graphical user interface for monitoring of all the Telerik Data Access activity in your application. For example, you can monitor a production environment to see which queries are affecting performance by executing too slowly.
Starting the Telerik Data Access Profiler
You could start the Telerik Data Access Profiler via the Windows Start menu command.
Two Views For Displaying Information
Telerik Data Access produces two different kinds of data - metrics and events. Metrics are similar to the operating system counters. They produce snapshots of the system status like counting insert, update and delete statements per second. Log events contain the real operation information, including query parameters and stack traces. The Telerik Data Access Profiler provides views for both data types - Events View and Metrics View. The Events View is the default view you are going to see when you start the Telerik Data Access Profiler for first time. You could switch to the Metrics View by using the Metrics View Toolbar Command.
Saving the Trace Data Locally
The Telerik Data Access Profiler allows you to store the trace data locally for analysis - just be clicking the Save button you will have the trace saved in a file you choose.
Two Ways For Monitoring Your Application
The Telerik Data Access Profiler allows you to monitor your application in two different ways:
- Offline monitoring - you can capture and save data about each event to a file to analyze later.
- Real-time (live) monitoring - you can download a real-time data via a Telerik Data Access implemented web service.
Both ways can be used in parallel to generate and store the data. The following topics describe how both ways have to be configured:
- How to: Configure A Fluent Model For Offline Monitoring
- How to: Configure A Fluent Model For Real-Time Monitoring
The topics in this section show how to work with the Telerik Data Access Profiler: | https://docs.telerik.com/data-access/developers-guide/profiling-and-tuning/profiler-and-tuning-advisor/data-access-profiler-introduction | 2020-05-25T05:51:13 | CC-MAIN-2020-24 | 1590347387219.0 | [array(['/data-access/images/1oatasks-oaprofiler-introduction-010.png',
None], dtype=object)
array(['/data-access/images/1oatasks-oaprofiler-introduction-020.png',
None], dtype=object) ] | docs.telerik.com |
When you want to update a base layer, but the reference machine that was used to create the original base layer is not available, you can recreate the original reference machine from the existing base layer.
Procedure
- In the Mirage Management console, expand the Image Composer node and select the Base Layers tab.
- Right-click the base layer and select Create Reference CVD from layer.
- Select a pending device and click Next.
- Select an upload policy and click Next.
- Click Finish.
What to do next
Use a Mirage restore operation to download and apply the image of the original reference machine to a selected device to serve as a new reference machine. See Restoring to a CVD After Hard Drive Replacement or Device Loss. You then update or install core applications and apply security updates on the new reference machine before you capture a new base layer using the existing reference CVD. | https://docs.vmware.com/en/VMware-Mirage/5.7/com.vmware.mirage.admin/GUID-121554FF-81FB-4CCF-9E25-13592D65AE9D.html | 2017-11-17T23:37:06 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
Configure the location of the profile archives share from where FlexEngine reads and stores user profile archives and other settings that are related to the profile archives.
Procedure
- In the Group Policy Management Editor, double-click the Profile Archives setting.
- Select Enabled.
- Configure the settings for storing the profile archives. | https://docs.vmware.com/en/VMware-User-Environment-Manager/9.2/com.vmware.user.environment.manager-install-config/GUID-E7024C8E-3ACB-4D3A-BFA5-2E730C3617ED.html | 2017-11-17T23:36:35 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
i. Abstract
This standard describes a conceptual and logical model for the exchange of groundwater data, as well as a GML/XML encoding with examples.
ii. Keywords
The following are keywords to be used by search engines and document catalogues.
ogcdoc, OGC document, groundwater, hydrogeology, aquifer, water well, observation, well construction, groundwater flow, groundwater monitoring, UML, GML, GroundwaterML, GWML2.
iii. Preface
Motivation
A significant portion of the global water supply can be attributed to groundwater resources. Effective management of such resources requires the collection, management and delivery of related data, but these are impeded by issues related to data availability, distribution, fragmentation, and heterogeneity: collected data are not all readily available and accessible, available data is distributed across many agencies in different sectors, often thematically fragmented, and similar types of data are diversely structured by the various data providers. This situation holds both within and between political entities, such as countries or states, impairing groundwater management across all jurisdictions. Groundwater data networks are an emerging solution to this problem as they couple data providers through a unified data delivery vehicle, thus reducing or eliminating distribution, fragmentation, and heterogeneity through the incorporation of standards for data access and data content. The relative maturity of OGC data access standards, such as the Web Feature Service (WFS) and Sensor Observation Service (SOS), combined with the rise of water data networks, have created a need for GroundWaterML2 (GWML2), a common groundwater data standard.
Historical background
Several activities have influenced the development of GWML2.
- GWML1: a GML application schema for groundwater data developed at Natural Resources Canada and used to exchange groundwater data within Canada, between Canada and the USA, and in some other international efforts (Boisvert & Brodaric, 2012).
- GWIE1: an interoperability experiment within the OGC HDWG, in which groundwater data was shared across the USA-Canada border (Brodaric & Booth, 2011).
- GW2IE: a second interoperability experiment within the OGC HDWG, that designed and tested a precursor of GroundWaterML2 (GWML2, version 2.1): a conceptual, logical, and encoding specification for the representation of core groundwater data (OGC, 2016).
- INSPIRE Data Specification on Geology – hydrogeology package : a conceptual model and GML application schema for hydrogeology (INSPIRE, 2013), with regulatory force in the European Union and for which GWML2 is expected to be an encoding candidate.
BDLISA: the French Water Information System information models for water wells and hydrogeological features (BDLISA, 2013).
The primary goal of this standard is to capture the semantics, schema, and encoding syntax of key groundwater data, to enable information systems to interoperate with such):
- Geological Survey of Canada (GSC), Canada
- U.S. Geological Survey (USGS), United States of America
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Bureau of Meteorology (BOM), Australia
- Federation University Australia (FedUni), Australia
- Bureau de Recherches Géologiques et Minières (BRGM), France
- Salzburg University (U Salzburg), Austria
- Geological Survey of Canada (GSC), Canada
- U.S. Geological Survey (USGS), United States of America
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
- Federation University Australia (FedUni), Australia
- Bureau of Meteorology (BOM), Australia
- European Commission, Directorate General – Joint Research Centre (JRC), European Union
- Polish Association for Spatial Information
- Polish Geological Institute (PGI), Poland
- Geological Surveys of Germany (GSG), Germany
- Salzburg University (U Salzburg), Austria
- Bureau de Recherches Géologiques et Minières (BRGM), France
- British Geological Survey (BGS), U.K.
- International Groundwater Resources Assessment Centre (IGRAC), UNESCO
The following organizations contributed to the initiation or development of this standard:
v. Submitters
All questions regarding this submission should be directed to the editor or the submitters:
1. Scope
This document is an OGC® conceptual, logical and encoding standard for GWML2, which represents key groundwater data. GWML2 is implemented as an application schema of the Geography Markup Language (GML) version 3.2.1, and re-uses entities from other GML application schema, most notably the OGC Observations & Measurements standard and the OGC/IUGS GeoSciML 4.0 (OGC 16-008) standard. GWML2 version 2.2 (this document) updates version 2.1, which was developed by the GW2IE (OGC, 2016), by importing GeoSciML 4.0 instead of GeoSciML 3.2.0, and by using TimeseriesML (OGC 15-042r2) instead of OGC WaterML2.0 part 1 – Timeseries.
GWML2 is designed to enable a variety of data exchange scenarios. These scenarios are captured by its five motivating use cases, including:
- a commercial use-case focused on drilling water wells with knowledge of aquifers,
- a policy use case concerned with the management of groundwater resources,
- an environmental use-case that considers the role of groundwater in natural eco-systems,
- a scientific use-case concerned with modeling groundwater systems, and
- a technologic use-case concerned with interoperability between diverse information systems and associated data formats.
GWML2 is designed in three stages, each consisting of a schema that builds on the previous stages. The three schemas include:
- Conceptual (UML): a technology-neutral schema denoting the semantics of the domain,
- Logical (UML): a GML-specific schema that incorporates the OGC suite of standards,
- XML schema (XSD): a GML syntactical encoding of the logical schema.
In addition, this standard describes general and XML-specific encoding requirements, general and XML-specific conformance tests, and XML encoding examples. The standard is designed for future extension into other non-XML encoding syntaxes, which would require each such encoding to describe the related schema, requirements and conformance classes, as well as provide examples.
The GWML2 Logical and XML schemas are organized into 6 modular packages:
- GWML2-Main: core elements such as aquifers, their pores, and fluid bodies,
- GWML2-Constituent: the biologic, chemical, and material constituents of a fluid body,
- GWML2-Flow: groundwater flow within and between containers,
- GWML2-Well: water wells, springs, and monitoring sites,
- GWML2-WellConstruction: the components used to construct a well,
- GWML2-AquiferTest: the elements comprising an aquifer test (e.g. a pumping test).
Altogether, the schemas and packages represent a machine-readable description of the key features associated with the groundwater domain, as well as their properties and relationships. This provides a semantics and syntax for the correct machine interpretation of the data, which promotes proper use of the data in further analysis. Existing systems can use GWML2 to ‘bridge’ between existing schema or systems, allowing consistency of the data to be maintained and enabling interoperability.
2. Conformance
This standard has been written to be compliant with the OGC Specification Model – A Standard for Modular Specification (OGC 08-131r3). Extensions of this standard shall themselves be conformant to the OGC Specification Model.
2.1 XML implementation
The XML implementation (encoding) of the conceptual and logical groundwater schemas is described using the XML Schema language and Schematron.
Requirements for one standardization target type are considered:
- data instances.
i.e. XML documents that encode groundwater data. As data producing applications should generate conformant data instances, the requirements and tests described in this standard effectively also apply to that target.
Conformance with this standard shall be checked using all the relevant tests specified in Annex A (normative) of this document. The framework, concepts, and methodology for testing, and the criteria to be achieved to claim conformance are specified in ISO 19105: Geographic information — Conformance and Testing. In order to conform to this OGCencoding standard, a standardization target shall implement the core conformance class, and choose to implement any one of the other conformance classes (i.e. extensions).
All requirements-classes and conformance-classes described in this document are owned by the standard(s) identified.
2.2 Use of vocabularies
Controlled vocabularies, also known as code-lists, are used in data exchange to identify particular concepts or terms, and sometimes relationships between them. For example, an organization may define a controlled vocabulary for all observed phenomena, such as water quality parameters, that are to be exchanged between parties. Some of these definitions may be related by hierarchical relationships, such as specialization, or through other relationships such as equivalence.
GroundWaterML2.0 does not define a set of vocabularies for groundwater data exchange in this version. It is envisaged that specific communities will develop local vocabularies for data exchange within the community. Future work within the Hydrology Domain Working Group could address standardized controlled vocabularies for the groundwater domain. Such vocabularies require a governance structure that allows changes to be made as definitions evolve, possibly using).
The following convention has been used throughout the document to identify attributes requiring controlled vocabularies:
- In the conceptual model, such attributes are typed with a name ending by “Type” (ex : PorosityType); and
- In the logical model this suffix becomes ‘TypeTerm’ (ex : PorosityTypeTerm).
2.3 Groundwater data
Groundwater data conforming to this standard are encoded in GML-conformant XML documents, for this version of GWML2. It is anticipated that future versions or extensions will develop additional encodings such as JSON or RDF. The standard MIME-type and sub-type for GML data should be used to indicate the encoding choice as specified in MIME Media Types for GML, namely: application/gml+xml.: OGC 15-043r3, Timeseries Profile of Observations and Measurements (2016)
- OGC: OGC 08-131r3, The Specification Model – A Standard for Modular Specification (2009)
- OGC: OGC 10-126r4, WaterML2.0 part 1 – Timeseries (2014)
- OGC: OGC 15-042r2, TimeseriesML 1.0 – XML Encoding of the Timeseries Profile of Observations and Measurements (2016)
- OGC: OGC 15-082, OGC GroundWaterML 2 – GW2IE Final Report (2016)
- OGC: OGC 16-008, OGC Geoscience Markup Language 4.0 (GeoSciML) (in publication)
- OGC: OGC 06-121r9, OGC Web Services Common Standard (2010)
- ISO / TC 211: ISO 19103:2005, Conceptual Schema Language (2005)
- ISO: ISO 8601:2004, Data elements and interchange formats – Information interchange – Representation of dates and times (2004)
- OGC: OGC 10-004r3, OGC Abstract Specification Topic 20 – Observations and Measurements (aka ISO 19156:2011) (2011)
- OGC: OGC 08-015r2, OGC Abstract Specification Topic 2 – Spatial Referencing by Coordinates (aka ISO 19111:2007) (2007)
- OGC: OGC 07-011, OGC Abstract Specification Topic 6 – Schema for Coverage geometry and functions (aka ISO 19123:2005) (2005)
- OGC: OGC 01-111, OGC Abstract Specification Topic 11 – Geographic information — Metadata (aka ISO 19115:2003) (2003)
- OGC: OGC 07-036, Geography Markup Language (aka ISO 19136:2007) (2007)
- OGC: OGC 10-004r1, Observations and Measurements v2.0 (also published as ISO/DIS 19156:2010, Geographic information — Observations and Measurements) (2010)
- OGC: OGC 10-025r1, Observations and Measurements - XML Implementation v2.0 (2011)
- OGC: OGC 08-094r1, SWE Common Data Model Encoding Standard v2.0 (2011)
- ISO/IEC: Schematron: ISO/IEC 19757-3:2006, Information technology — Document Schema Definition Languages (DSDL) — Part 3: Rule-based validation — Schematron (2006) (see)
- OGC: OGC 12-000, SensorML (2014)
- Schadow, G and McDonald, C.: Unified Code for Units of Measure (UCUM) – Version 1.8 (2009)
- OMG: Unified Modeling Language (UML). Version 2.3 (2010)
- W3C: Extensible Markup Language (XML) – Version 1.0 (Fourth Edition) (2006)
- W3C: XML Schema – Version 1.0 (Second Edition) .
NOTE: This may be contrasted with observations and sampling features, which are features of types defined for cross-domain purposes.
[ISO 19156, definition 4.4]
- 4.3 element <XML>
Basic information item of an XML document containing child elements, attributes and character data.
NOTE: From the XML Information Set ― each]
- 4.4 feature
Abstraction of a real-world phenomena.
[ISO 19101:2002, definition 4.11]
- 4.5 GML application schema
Application schema written in XML Schema in accordance with the rules specified in ISO 19136:2007.
[ISO 19136:2007]
- 4.6 GML document
XML document with a root element that is one of the elements AbstractFeature, Dictionary or TopoComplex, specified in the GML schema or any element of a substitution group of any of these elements.
[ISO 19136:2007]
- 4.7 GML schema
Schema components in the XML namespace ―‖ as specified in ISO 19136:2007.
[ISO 19136:2007]
- 4.8 measurement
Set of operations having the objective of determining the value of a quantity.
[ISO/TS 19101-2:2008, definition 4.20]
- 4.9 observation
Act of observing a property.
NOTE: The goal of an observation may be to measure or otherwise determine the value of a property.
[ISO 19156:2011 definition 4.10]
- 4.10 observation procedure
Method, algorithm or instrument, or system which may be used in making an observation.
[ISO19156, definition 4.11]
- 4.11 observation result
Estimate of the value of a property determined through a known procedure.
[ISO 19156:2011]
- 4.12 property <General Feature Model>
Facet or attribute of an object referenced by a name.
EXAMPLE: Abby’s car has the colour red, where “colour red” is a property of the car instance.
- 4.13 sampled feature
The real-world domain feature of interest, such as a groundwater body, aquifer, river, lake, or sea, which is observed.
[ISO 19156:2011]
- 4.14 sampling feature
Feature, such as a station, transect, section or specimen, which is involved in making observations of a domain feature.
NOTE: A sampling feature is purely an artefact of the observational strategy, and has no significance independent of the observational campaign.
[ISO 19156:2011, definition 4.16]
- 4.15 schema <XML Schema>
XML document containing a collection of schema component definitions and declarations within the same target namespace.
Example Schema components of W3C XML Schema are types, elements, attributes, groups, etc.
NOTE: The W3C XML Schema provides an XML interchange format for schema information. A single schema document provides descriptions of components associated with a single XML namespace, but several documents may describe components in the same schema, i.e. the same target namespace.
[ISO 19136:2007]
- 4.16.]
5. Conventions
5.1.
5.2 Requirement
All requirements are normative, and each is presented with the following template:
where /req/[classM]/[reqN] identifies the requirement or recommendation. The use of this layout convention allows the normative provisions of this standard to be easily located by implementers.
5.3 Conformance class
Conformance to this standard is possible at a number of levels, specified by conformance classes (Annex A). Each conformance class is summarized using the following template:
All tests in a class must be passed. Each conformance class tests conformance to a set of requirements packaged in a requirements class.
W3C Schema (XSD) and ISO Schematron (SCH) files are considered as part of this standard, although available online only, due to concerns about document size. Many requirements are expressed in a single XSD or SCH file although tests are listed individually in the conformance annex (one test for XSD and one test for SCH).
Schematron files explicitly specify which requirements are being tested in the title of the schematron pattern.
<pattern id="origin_elevation"> <title>Test requirement: /req/well-xsd/origin-elevation</title> <rule context="gwml2w:GW_Well"> <assert test="count(gwml2w:gwWellReferenceElevation /gwml2w:Elevation[gwml2w:elevationType/ @xlink:href=’ req/well/origin_elevation’]) = 1">A GW_Well needs at least one origin Elevation</assert> </rule> </pattern>
5.4 Identifiers
Each requirements class, requirement and recommendation is identified by a URI. The identifier supports cross-referencing of class membership, dependencies, and links from each conformance test to the requirements tested. In this standard, identifiers are expressed as partial URIs or paths, which can be appended to a base URI that identifies the specification].
5
GW GroundwaterML 2.0
TS TimeseriesML
5.6 Abbreviated terms
In this document the following abbreviations and acronyms are used or introduced:
API Application Program Interface
GeoSciML 3.2 GeoScience Mark-up Language version 3.2
GeoSciML 4.0 GeoScience Mark-up Language version 4.0
GML OGC Geography Mark-up Language
GWML1 Groundater Markup Language version 1.0 (Natural Resources Canada)
GWML2 Groundwater Markup Language version 2.0 (this standard)
GWML2-Main UML Logical Model of the primary GroundWaterML2 elements (namespace)
GWML2-Flow UML Logical Model of the elements required to capture groundwater flow (namespace)
GWML2-Constituent UML Logical Model of the groundwater fluid body constituents and their relationships (namespace)
GWML2-Well UML Logical Model of the features and properties associated with water well (namespace)
GWML2-WellConstruction UML Logical Model of the well drilling and construction details (namespace)
GWML2-AquiferTest UML Logical Model of the features and properties associated with aquifer test (namespace)
INSPIRE Infrastructure for Spatial Information in the European Community (Directive 2007/2/EC)
ISO International Organization for Standardization
IUGS International Union of Geological Sciences
NACSN North American Commission on Stratigraphic Nomenclature
NADM North American geological Data Model
OGC Open Geospatial Consortium
O&M OGC Observations and Measurements Conceptual Model
OMXML Observations and Measurements XML Implementation
SensorML Sensor Model Language
SOS Sensor Observation Service
SWE Sensor Web Enablement
TSML TimeseriesML
UML Unified Modeling Language
UTC Coordinated Universal Time
URI Universal Resource Identifier
URL Universal Resource Locator
WML2 WaterML 2.0 – Part 1
XML Extensible Markup Language
XSD W3C XML Schema Definition Language
5.7 UML notation
The diagrams that appear in this standard, including the GWML2 Conceptual and Logical schemas, are presented using the Unified Modeling Language (UML), in compliance with ISO/IEC 19505-2.
Note:Within the GWML2 conceptual and logical diagrams, the following color scheme is used to identify packages in some cases. This is just for information purposes.
Amber: GWML2 defined within this standard
Green and Purple: from GeoSciML 4.0
Blue: from O&M
5.8 Finding requirements and recommendations
This standard is identified as. For clarity, each normative statement in this standard is in one and only one place, and defined within a requirements class table and identified with a URI, whose root is the standard URI. In this standard, all requirements are associated to tests in the abstract test suite in Annex A. using the URL of the requirement as the reference identifier. Recommendations are not tested but are assigned URLs and are identified using the ‘Recommendation’ label in the associated requirements table.
Requirements classes are separated into their own clauses, named, and specified according to inheritance (direct dependencies). The Conformance test classes in the test suite are similarly named to establish an explicit and mnemonic link between requirements classes and conformance test classes.
6. Background
6.1 Technical Basis
This standard builds on a number of standards for encoding XML data, including:
- OMXML (OGC 10-025r1)
- sweCommon (OGC 08-094r1)
- GML ISO 19136:2007 (OGC 07-036)
- ISO 19139 (Metadata)
- W3C XSD
This standard also builds on existing schema, primarily Observations & Measurements (OMXML) and GeoSciML 4.0 (OGC 16-008). It accomplishes this by (a) extending these schemas with groundwater specializations, (b) referring to a class in these schema in order to type a named property, or (c) using a class from the schemas as one of the two participants in a binary relationship.
6.2 Overview of Observations & Measurements
ISO19156 – Observations and Measurements is a generic GML schema for observations. As shown in Figure 1,.”
6.2.1 Sampling features
Sampling features in O&M are defined as a “feature, such as a station, transect, section or specimen, which is involved in making observations concerning a domain feature.” Sampling features in the groundwater domain are features along which, or upon, observations are made. The most relevant are water wells and boreholes, which effectively host observations along staged intervals; a collection of these intervals and their observations constitutes a log.
6.3 Overview of GeoSciML 4.0
GeoSciML 4.0 is a GML schema for core geological entities including geological units, structures, and earth materials. It is particularly relevant to GWML2 because bodies of rock serve as containers for subsurface water bodies. Such rock bodies possess variable hydrogeologic properties according to their material composition and topological organization. Thus, geological units and earth materials are the key GeoSciML 4.0 entities required by GWML2.
GeoSciML 4.0 defines a geological unit as ‘granitic rock’ or ‘alluvial deposit’, surficial units like ‘till’ or ‘old alluvium’).”
GeoSciML 4.0 defines an earth material as “naturally occurring substance in the Earth” and intuitively refers to various types of rocks such as sandstone, granite, and gneiss.
7. Conceptual Model
The GWML2 conceptual model is designed to be technology-neutral, and focused on the semantics of the groundwater domain. It consists of five components, as well as related properties and other entities: hydrogeological units, fluid bodies, voids, fluid flow, and wells. Conceptually, these entities form a simple template for a subsurface water container: the fluid container (a unit or its materials), the fluid itself (fluid body), the spaces in the container occupied by the fluid (void), the flow of fluid within and between containers and their spaces (flow), and the natural and artificial artifacts used to withdraw, inject, or monitor fluid with respect to a container (wells, springs, monitoring sites).
Well construction details are excluded from the conceptual model, but are included in the logical model for two reasons: (1) thematic, inasmuch as well construction was considered on the periphery of groundwater science, but important to resource management as well as important to significant data exchange scenarios, and (2) practical, as it is sufficiently modeled in GWML1 and could thus be directly imported with few changes. This eliminates the need for its re-conceptualization in the GWML2 conceptual model, keeping it tightly focused.
7.1 Hydrogeological Units
These are distinct volumes of earth material that serve as containers for subsurface fluids. The boundaries of a unit are typically discriminated from those of another unit using properties related to the potential or actual ability to contain or move water. The properties can be geological or hydraulic, and typically include influences from the surrounding hydrological environment. More specifically, the conceptual model delineates two types of hydrogeological units, with slightly different orientations: aquifer-related units have boundaries delimited by the hydrogeological properties of the rock body, while groundwater basins have boundaries delimited by distinct flow regimes. Aquifer-related units are subdivided into aquifer systems, which are collections of aquifers, confining beds, and other aquifer systems. Confining beds are units that impede water flow to surrounding units, and supersede notions such as aquitards, aquicludes, and aquifuges, which are not included herein, as it is difficult to differentiate these in practice.
Several significant properties are typically attributed to hydrogeological units, such as porosity, permeability, and conductivity, but these and others are modeled more accurately here as occurring necessarily concurrent with (dependent on) voids or fluid bodies. For example, porosity, in its various forms, requires both the presence of a unit (container) and its voids, as it is typically defined as the proportion of void volume to total unit volume (i.e. volume of solid material plus voids). Likewise, properties such as hydraulic conductivity and yield require the presence of units and fluid bodies, as they are concerned with the rate of movement of a fluid through a unit. Note that permeability and hydraulic conductivity are differentiated here: permeability refers to intrinsic permeability, which measures the ability of a unit to host fluid flow, independent of fluid properties and based solely on the connectivity and size of voids, whereas hydraulic conductivity additionally considers fluid properties.
Likewise, management areas are also relational entities in the sense that they are typically necessarily linked with a unit (or system) and possibly a fluid body. Management areas are earth bodies identified for groundwater management purposes and their boundaries can be delineated by social factors, such as policy or regulation, in addition to physical factors related to hydrogeology or hydrology.
7.2 Fluid Bodies
These are distinct bodies of fluid (liquid or gas) that fill the voids in hydrogeological units. Fluid bodies are made of biologic (e.g. organisms), chemical (e.g. solutes), or material constituents (e.g. sediment). While it is expected that the major constituent of a fluid body will be water, the conceptual model allows for other types of major constituents such as petroleum. Minor constituents are not necessarily fluids, but can be gases, liquids, or solids (including organisms), and are included in the fluid body in various forms of mixture, such as solution, suspension, emulsion, and precipitates. Fluid bodies can also have other fluid bodies as parts, such as plumes or gas bubbles. Surfaces can be identified on a fluid body, such as a water table, piezometric or potentiometric surface, and some such surfaces can contain divides, which are lines projected to the fluid surface denoting divergence in the direction of flow systems within the fluid.
7.3 Voids
Voids are the spaces inside a unit (e.g. aquifer) or its material (e.g. the sandstone material of an aquifer), and might contain fluid bodies. Voids are differentiated from porosity, in that porosity is a ratio of void volume to total volume of unit plus voids, while voids are the spaces themselves. It is important to conceptually differentiate voids from units and their containers, in order to represent, for example, the volume of fractures, caves, or pores in a particular unit or its portion.
7.4 Flow
Groundwater flow denotes the process by which a fluid enters or exits a container (unit) or its voids, or flows within them. Flow between one container or void and another is named InterFlow, and flow within a container or void is named IntraFlow. Recharge is the flow into a groundwater container or void, and discharge is flow out of a groundwater container or void. The reciprocal source or destination entity can be any appropriate container or void such as a river, lake, pipe, reservoir, canyon, flood plain, ground surface, etc. A flow system is then a collection of flows ordered in a sequence from recharge to discharge, such that the flow segments of the system make up a connected flow path from source to destination. A water budget is a measure of the balance of recharge and discharge valid for a specific time and relative to a specific groundwater feature, such as a basin, aquifer, management area, or well.
Many of these concepts are depicted in Figure 2. Shown is a flow system (A+B) and two subsystems (A, B) that are its parts. Each subsystem is composed of interior flows, indicated by the solid lines with arrows, as well as input and output flows indicated as recharge and discharge, respectively. These flow systems are contained by three distinct hydrogeologic unit bodies, with the middle body oriented at an angle and having a K (hydraulic conductivity) value of 10-5. Intraflow is exemplified by a flow line within the right hydrogeologic unit body, while Interflow is exemplified by the flow from right body (the source container) to middle body (the destination container). The boundary between the bodies serves as the interface through which the flow occurs. While not shown, the three hydrogeologic unit bodies contain a groundwater body (i.e. a fluid body) in their pores (i.e. voids), and it is this groundwater body that is flowing.
7.5 Wells
Well-related entities include water wells, springs, and monitoring sites. Water wells are man-made constructions for monitoring, withdrawing, or injecting water from/into a hydrogeological unit, while springs are features where water discharges to the surface naturally. Both wells and springs possess important links to the hydrogeological environment, including their host units and materials, as well as the intersecting fluid body. Monitoring sites are locations where devices are placed to measure various properties of significance to hydrogeology, such as water level, flow rate, water temperature, or chemical composition, or to take samples. As such, monitoring sites are roles played by other features, for example, water wells or springs. | http://docs.opengeospatial.org/is/16-032r2/16-032r2.html | 2017-11-17T23:04:45 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.opengeospatial.org |
Identify Docs 1.0
An easy to use software package that allows the user to batch process a CD full of Tif images. The processing includes performing OCR on the image and stamping the image with an identifying number.
Last update 25 May. 2007 Licence Free to try | $299.00 OS Support Windows Downloads Total: 215 | Last week: 1 Ranking #191 in Office Tools Publisher Edocfile Inc.
Screenshots of Identify Docs
Identify Docs Publisher's Description
What's New in Version 1.0 of Identify Docs
none - | https://identify-docs.soft32.com/?rel=breadcrumb | 2017-11-17T22:51:14 | CC-MAIN-2017-47 | 1510934804019.50 | [] | identify-docs.soft32.com |
Discover and Command Post-deployment Checklist
After you deploy the ExtraHop Discover or Command appliance, log into the Admin UI the certificates section in the Admin UI Guide.
- DNS A Record
- It is easier to access an ExtraHop appliance by hostname than by IP address. Create an A record in your DNS root ("exa.yourdomain.local") for each ExtraHop appliance in your deployment. Refer to your DNS administration manual.
- Customizations
- The datastore is easier to restore when you periodically save customizations. Save the current datastore configuration settings. For more information, see the Customizations section in the Admin UI guide.
How can we improve? | https://docs.extrahop.com/6.2/eh-post-deployment-checklist/ | 2017-11-17T22:54:30 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.extrahop.com |
Product: Epic Wings
Product Code: 13038 (ds_ac116)
DAZ Original: Yes
Created by: etujedi
Released: August 11, 2011
Required Products: DAZ Studio 4+. Populate your scene with a flying horse or “Epic” Anubis! The possibilities are limitless.
Creator Notes
Visit our site for further technical support questions or concerns:
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170 | http://docs.daz3d.com/doku.php/artzone/azproduct/13038 | 2017-11-17T23:07:15 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.daz3d.com |
(President Barack Obama and Secretary of State Hillary Clinton)
Following his victory in the 2008 presidential election Barack Obama chose Hillary Clinton as his Secretary of State. Many pundits conjectured as to why Obama made this selection. They argued that he was following the path of Abraham Lincoln by placing his opponents in his cabinet so he could keep an eye on them and control any opposition. This view is wonderfully presented in Doris Kearns Goodwin’s TEAM OF RIVALS: THE POLITICAL GENIUS OF ABRAHAM LINCOLN, but one must ask could Goodwin’s thesis actually represent Obama’s motivation. In his new book, ALTER EGOS: HILLARY CLINTON, BARACK OBAMA, AND THE TWILIGHT STRUGGLE OVER AMERICAN POWER Mark Landler, a New York Times reporter compares Obama and Clinton’s approach to the conduct of foreign policy and how it has affected America’s position in the world. In do so Landler explores in detail their relationship on a personal, political, and ideological level. Landler delves into the differences in their backgrounds that reflect how they came to be such powerful figures and why they pursue the realpolitik that each believes in. In so doing we learn a great deal about each person and can speculate on why Obama chose what really can only be characterized as his political enemy throughout the 2008 campaign trail as his Secretary of State. What is even more interesting is their differences that can be summed up very succinctly; for Obama the key to conducting a successful foreign policy was “Don’t do stupid shit,” for Clinton, “great nations need organizing principles…don’t do stupid stuff is not an organizing principle.”
Since we are in the midst of a presidential election and it appears that Hillary Clinton will be the Democratic nominee it is important to evaluate and understand her approach to foreign relations. Landler does the American electorate a service as his book is a useful handbook in understanding and getting an idea how she would approach the major foreign policy issues that America currently faces should she assume the oval office. By comparing her with Obama we gain important insights into her thinking and how she would implement her ideas. It is clear during Obama’s first term that Clinton was the “house hawk” within his administration as she supported increases in troop deployments to Afghanistan which Obama reluctantly agreed to, but only with a set time limit; she wanted to leave a large residual force in Iraq after American withdrawal which Obama did not do; she favored funneling weapons to rebels in Syria fighting Assad as well as the creation of a no fly zone which Obama opposed; and lastly, she favored the overthrow of Muammar al-Qaddafi and the bombing of Libya when he threatened to destroy Benghazi which Obama reluctantly agreed to. Their difference are clear, Obama believes that the United States is too willing to commit to military force and intervene in foreign countries, a strategy that has been a failure and has led to a decline in America’s reputation worldwide, a reputation he promised to improve and has been partly successful with the opening to Cuba and the nuclear deal with Iran. For Clinton the calculated employment of American military power is important in defending our national interests, and that our intervention does more good than harm, especially in exporting development programs and focusing on human rights. Obama arrived on the scene as a counterrevolutionary bent on ending Bush’s wars and restoring America’s moral standing. He no longer accepted the idea that the U.S. was the world’s undisputed “hegemon” and shunned the language of American exceptionalism. Clinton has a much more conventional and political approach, “she is at heart a ‘situationist,’ somebody who reacts to problems piecemeal rather than fitting them into a larger doctrine.” Her view is grounded in cold calculation with a textbook view of American exceptionalism.
Landler describes the difficulties that Clinton had adapting to the Obama White House that is very centralized in decision-making and she had difficulty penetrating Obama’s clannish inner circle. The author also does an excellent job explaining the main players in Hillaryland and the Obama world that include Obama’s whiz kids, Denis McDonough and Ben Rhodes, and Clinton’s staffers Jake Sullivan and Huma Abedin. Since Obama was a self-confident president who had a tight grip on foreign policy, Clinton spent most of her time implementing the strategy set by the White House. During the first two years of the Obama administration Clinton pursued a global rehabilitation tour to patch up the mess that Bush left. During her second two years she did more of the heavy lifting on sensitive issues like Syria, Libya, Iran, China, and Israel which Landler dissects in detail. From her UN women’s conference address in Beijing during her husband’s administration, her lackluster attempts at bringing peace between the Palestinians and Israel, developing and implementing sanctions against Iran, her support for the rebels in Syria, and the overthrow of Qaddafi, we get unique insights into Clinton’s approach to foreign policy.
The fundamental difference or fault line between Obama and Clinton was Clinton’s vote in favor of the invasion of Iraq on October 2, 2002, a vote that Obama opposed as a state senator in Illinois. Landler does a marvelous job comparing their backgrounds and the influence of their personal experience on their worldview. Obama’s divided heritage of Hawaii, Kenya, and especially Indonesia defined him from the outset. For him Indonesia highlighted the ills of the oil companies, western development programs, and American power as it supported repressive military dictatorships to further its Cold War agenda. Obama was an anti-colonialist and could put himself in the place of third world cultures in his decision-making. Clinton on the other hand was rooted in Midwestern conservatism and her interests after law school was to try and alleviate poverty and defend the legal rights of children. Landler is correct when he states that “Clinton viewed her country from the inside out; Obama from the outside in.”
(Special envoy Richard Holbrooke)
Landler presents a number of important chapters that provide numerous insights into the Obama-Clinton relationship. Particularly important is the chapter that focuses on Richard Holbrooke, a career diplomat that dated back to Vietnam and ended with his death in 2010. A swash buckling man who did not fit into the Obama mold was brilliant, self-promoting and usually very effective, i.e., the Dayton Accords in 1995 that ended the fighting in Bosnia. He hoped as Clinton’s special envoy for Pakistan and Afghanistan to help mediate and bring some sort of closure to the conflict with the Taliban. Holbrooke rubbed Obama the wrong way and was seen as the epitome of everything Obama rejected in a diplomat and Clinton who had a very strong relationship with Holbrooke going back many years spent a great deal of time putting out fires that he caused. Another important chapter focuses on administration attempts to mediate the Palestinian-Israeli conflict. For Clinton it was a no win situation for a person who represented New York in the Senate and planned to seek the presidency on her own. Obama would force her to become engaged in the process along with special envoy, George Mitchell, and she spent a great deal of time trying to control the animosity between Obama and Israeli Prime Minister Benjamin Netanyahu. Landler’s discussion of the Obama-Netanyahu relationship is dead on as the Israeli Prime Minister and his right wing Likud supporters represented the colonialism that Obama despised. For Netanyahu, his disdain for the president was equal in kind. In dealing with the Middle East and the Arab Spring Clinton argued against abandoning Egyptian President Hosni Mubarak as she believed in the stability and loyalty to allies, Obama wanted to be “on the right side of history,” and in hindsight he was proven to be totally wrong. These views are polar opposites and helps explain Obama and Clinton’s frustration with each other that form a major theme of Landler’s narrative.
Obama’s drone policy was another source of disagreement between the President and Secretary of State. For Obama “targeted killings” was a better strategy that the commitment of massive numbers of American troops. The primacy of employing drones is the key to understanding Obama’s foreign policy. For Clinton regional stability, engagement, and the United States military is the key to a successful foreign policy. As Vasil Nasr states, Obama believes that “we don’t need to invest in the Arab Spring. We don’t need to worry about any of this; all we need to do is to kill terrorists. It’s a different philosophy of foreign policy. It’s surgical, it’s clinical, and it’s clean.”
Perhaps Landler’s best chapter deals with the evolution of Syrian policy. Internally Clinton favored aid to the Syrian rebels which Obama opposed during the summer of 2012. However, when Obama decided to walk back his position on the “Red Line” that if crossed by Assad through the use of chemical weapons, the US would respond with missile attacks. Once this policy changed to seeking Congressional approval for any missile attack, the United States gave up any hope in shaping the battlefield in Syria which would be seized by others eventually leading to ISIS. Obama needed Clinton’s support for this change. Though privately Clinton opposed the move, publicly at her own political risk she supported the president. This raises the question; how much difference was there in their approach to foreign policy? It would appear that though there were differences, Clinton was a good team player, even out of office, though as the 2016 presidential campaign has evolved she has put some daylight between her and the president. From Obama’s perspective, though he disagreed with his Secretary of State on a number of occasions he did succumb to her position on a series of issues, particularly Libya, which he came to regret. The bottom line is clear, Clinton kept casting around for solutions for the Syrian Civil War, however unrealistic. Obama believed that there were no solutions – at least none that could be imposed by the U.S. military. Another example of how the two worked together was in dealing with Iran’s nuclear program. They both agreed on the approach to be taken, a two track policy of pressure and engagement. Clinton played the bad cop enlisting a coalition of countries to impose punishing sanctions while the President sent letters to the Supreme Leader and taped greetings to the Iranian people on the Persian New Year as the good cop! But, once again they appeared to be working in lock step together.
(Secretary of State Hillary Clinton and Burmese pro-democracy leader, Aung San Suu Kyi in 2012)
The question proposed at the outset of this review was whether President Obama chose Hillary Clinton so he could keep her within the “tent” as Abraham Lincoln did. After reading ALTER EGOS there is no concrete conclusion that one can arrive at. Even at the end of Clinton’s term as Secretary of State two major diplomatic moves were made; the groundwork that would lead to a restoration of relations with Havana and an opening with Burma took place. In both cases the President and Clinton were on the same page, therefore one must conclude that though there were some bumps in the road, publicly, Obama and Clinton pursued a similar agenda and were mostly in agreement. As a result, it would appear that they are more similar than different and that the “team of rivals” concept may not fit. It seems the title ALTER EGOS could give way, perhaps to THE ODD COUPLE, a description that might be more appropriate.
(President Barack Obama and Secretary of State Hillary Clinton) | https://docs-books.com/2016/05/13/alter-egos-hillary-clinton-barack-obama-and-the-twilight-struggle-over-american-power-by-mark-landler/ | 2019-02-15T21:53:00 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs-books.com |
LDAP prerequisites and considerations
Before configuring LDAP for authentication with Splunk, make the preparations described in this topic.
Determine your User and Group Base DN
Before you map your LDAP settings to Splunk settings, figure out your user and group base DN, or distinguished name. The DN is the location in the directory where authentication information is stored.
If group membership information for users is kept in a separate entry, enter a separate DN identifying the subtree in the directory where the group information is stored. Users and groups will be searched recursively on all the subnodes under this DN. If your LDAP tree does not have group entries, you can set the group base DN to the same as the user base DN to treat users as their own group. This requires further configuration, described later.
If you are unable to get this information, contact your LDAP Administrator for assistance.
Note: For best results when integrating Splunk Enterprise with Active Directory, place your Group Base DN in a separate hierarchy than the User Base DN.
Additional considerations
When configuring Splunk Enterprise to work with LDAP, note the following:
- Entries in Splunk Web and
authentication.confare case sensitive.
- Any user created locally through Splunk native authentication will have precedence over an LDAP user of the same name. For example, if the LDAP server has a user with a username attribute (for instance, cn or uid) of 'admin' and the default Splunk user of the same name is present, the Splunk user will win. Only the local password will be accepted, and upon login the roles mapped to the local user will be in effect.
- The number of LDAP groups Splunk Web can display for mapping to roles is limited to the number your LDAP server can return in a query. You can use the Search request size limit and Search request time limit settings to configure this.
- To prevent Splunk from listing unnecessary groups, use the
groupBaseFilter. For example:
groupBaseFilter = (|(cn=SplunkAdmins)(cn=SplunkPowerUsers)(cn=Help Desk))
- If you must role map more than the maximum number of groups, you can edit
authentication.confdirectly. In this example, "roleMap_AD" specifies the name of the Splunk strategy. Each attribute/value pair maps a Splunk role to one or more LDAP groups:
[roleMap_AD] admin = SplunkAdmins1;SplunkAdmins2 power = SplunkPowerUsers user = SplunkUsers
- Splunk always uses LDAP protocol version 3, aka v3.! | https://docs.splunk.com/Documentation/Splunk/7.0.1/Security/LDAPconfigurationconsiderations | 2019-02-15T21:35:30 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
For cPanel & WHM 11.46
(Home >> Server Configuration >> Server Time)
Overview
This feature allows you to set your server’s time zone and synchronize its time with your network's time server.
Important
For your cPanel license to work properly, your server’s time must be correct. setting is incorrect.
To synchronize your server’s time with the network time server, click Sync Time with Time Server. | https://docs.cpanel.net/display/1146Docs/Server+Time | 2019-02-15T21:09:36 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.cpanel.net |
3.2.4.3 NetrWkstaUserEnum (Opnum 2)
The NetrWkstaUserEnum method returns information about users who are currently active on a remote computer.
unsigned long NetrWkstaUserEnum( [in, string, unique] WKSSVC_IDENTIFY_HANDLE ServerName, [in, out] LPWKSTA_USER_ENUM_STRUCT UserInfo, [in] unsigned long PreferredMaximumLength, [out] unsigned long* TotalEntries, [in, out, unique] unsigned long* ResumeHandle );
ServerName: A WKSSVC_IDENTIFY_HANDLE (section 2.2.2.1) that identifies the server. The client MUST map this structure to an RPC binding handle ([C706] sections 4.3.5 and 5.1.5.2). The server MUST ignore this parameter.
UserInfo: A pointer to the buffer to receive the data. The data MUST be returned as a WKSTA_USER_ENUM_STRUCT (section 2.2.5.14) structure that contains a Level member that specifies the type of structure to return.
PreferredMaximumLength: The number of bytes to allocate for the return data.
TotalEntries: The total number of entries that could have been enumerated if the buffer were big enough to hold all the entries.
ResumeHandle: A pointer that, if specified, and if this method returns ERROR_MORE_DATA, MUST receive an implementation-specific value<33> that can be passed in subsequent calls to this method, to continue with the enumeration of currently logged-on users.
If this parameter is NULL or points to zero, then the enumeration MUST start from the beginning of the list of currently logged-on users.
Return Values: When the message processing result matches the description in column two of the following table, this method MUST return one of the following values ([MS-ERREF] section 2.2). The most common error codes are listed in the following table.
Any other return value MUST conform to the error code requirements specified in Protocol Details (section 3).
The server SHOULD<34> enforce security measures to verify that the caller has the required permissions to execute this routine. If the server enforces security measures, and the caller does not have the required credentials, then the server MUST fail the call with ERROR_ACCESS_DENIED. Specifications for determining the identity of the caller for the purpose of performing an access check are in [MS-RPCE] section 3.3.3.1.3.
If the Level member of the WKSTA_USER_ENUM_STRUCT structure passed in the UserInfo parameter does not equal 0x00000000 or 0x00000001, then the server MUST fail the call.
If the Level member equals 0x00000000, then the server MUST return an array of the names of users currently logged on the computer. The server MUST return this information by filling the WKSTA_USER_INFO_0_CONTAINER (section 2.2.5.14) in the WkstaUserInfo field of the UserInfo parameter.
If the Level member equals 0x00000001, then the server MUST return an array of the names and domain information of each user currently logged on the computer, and a list of OtherDomains (section 3.2.1.3) in the computer.
If the PreferredMaximumLength parameter equals MAX_PREFERRED_LENGTH (section 2.2.1.3), the server MUST return all the requested data. Otherwise, if the PreferredMaximumLength is insufficient to hold all the entries, then the server MUST return the maximum number of entries that fit in the UserInfo buffer and return ERROR_MORE_DATA.
The following rules specify processing of the ResumeHandle parameter:
If the ResumeHandle parameter is either NULL or points to 0x00000000, then the enumeration MUST start from the beginning of the list of the currently logged on users.<35>
If the ResumeHandle parameter points to a non-zero value, then the server MUST continue enumeration based on the value of ResumeHandle. The server is not required to maintain any state between calls to the NetrWkstaUserEnum method.
If the client specifies a ResumeHandle, and if the server returns ERROR_MORE_DATA, then the server MUST set the value to which ResumeHandle points to an implementation-specific value that allow the server to continue with this enumeration on a subsequent call to this method, with the same value for ResumeHandle.
The server is not required to maintain any state between calls to the NetrWkstaUserEnum method. If the server returns NERR_Success or ERROR_MORE_DATA, then it MUST set the TotalEntries parameter to equal the total number of entries that could have been enumerated from the current resume position. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-wkst/55118c55-2122-4ef9-8664-0c1ff9e168f3 | 2019-02-15T20:52:56 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.microsoft.com |
Social Sharing
From PeepSo Docs
When enabled, users can RePost other users’ status updates according to their privacy settings. This sharing is kept within PeepSo itself. It doesn’t allow to share the post outside of PeepSo.
Users’ profiles may also be shared to other social networks, such as Facebook and Google Plus.
Backend setting in PeepSo config: | https://docs.peepso.com/wiki/Social_Sharing | 2019-02-15T21:33:34 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.peepso.com |
API Gateway 7.6.2 Policy Developer Guide Kerberos configuration The Kerberos Configuration under Server settings > Security > Kerberos in the node tree enables you to configure instance-wide Kerberos settings on API Gateway and to upload a Kerberos configuration file to API Gateway. This configuration file contains information on the location of the Kerberos Key Distribution Center (KDC), as well as which encryption algorithms, encryption keys, and domain realms to use. You can also configure trace options for the various APIs used by the Kerberos system, such as the Generic Security Services (GSS) and Simple and Protected GSS-API Negotiation (SPNEGO) APIs. UNIX/Linux platforms ship with a native implementation of the GSS library, which API Gateway can leverage. You can specify the location of the GSS library in this configuration window. For more details on different Kerberos setups with API Gateway, see API Gateway Kerberos Integration Guide. Kerberos configuration file — krb5.conf The Kerberos configuration file (krb5.conf) defines the location of the Kerberos KDC, supported encryption algorithms, and default realms in the Kerberos system. Both Kerberos clients and Kerberos services that are configured for API Gateway use this file. Kerberos clients need to know the location of the KDC so that they can obtain a Ticket Granting Ticket (TGT). They also need to know what encryption algorithms to use and what realm they belong to. Kerberos services do not need to call the KDC to request a TGT, but they still require the information on supported encryption algorithms and default realms contained in the krb5.conf file. A Kerberos client or service identifies the realm it belongs to because the realm is appended to its Kerberos principal name after the @ symbol. Alternatively, if the realm is not specified in the principal name, the Kerberos client or service assumes the realm to be the default_realm specified in the krb5.conf file. The file specifies only one default_realm, but you can specify a number of additional named realms. The default_realm setting is in the [libdefaults] section of the krb5.conf file. It points to a realm in the [realms] section. This setting is not required. The text input field in the Kerberos configuration window displays a default configuration for krb5.conf. You can type and modify the configuration as needed, and then click OK to upload it to your API Gateway configuration. Alternatively, if you have an existing krb5.conf file that you want to use, select Load File and open to the configuration file. The contents of the file are displayed in the text area, and you can edit and upload it to API Gateway. Note Refer to your Kerberos documentation for more information on the settings that can be configured in the krb5.conf file. Advanced settings You can configure various tracing options for the underlying Kerberos API using the check boxes on the Advanced settings tab. Trace output is always written to the /trace directory of your API Gateway installation. Kerberos Debug Trace– Enables extra tracing from the Kerberos API layer. SPNEGO Debug Trace – Switches on extra tracing from the SPNEGO API layer. Extra Debug at Login– Provides extra tracing information during login to the Kerberos KDC. Native GSS library The Generic Security Services API (GSS-API) is an API for accessing security services, including Kerberos. Implementations of the GSS-API ship with the UNIX/Linux platforms and can be leveraged by API Gateway when it is installed on these platforms. The fields on this tab allow you to configure various aspects of the GSS-API implementation for your target platform. Use Native GSS Library:Select this to use the operating system's native GSS implementation. This option only applies to API Gateway installations on the UNIX/Linux platforms. Note These are instance-wide settings. If you select Use Native GSS Library, it is used for all Kerberos operations, and all Kerberos clients and services must be configured to load their credentials natively.If the native library is used, the following features are not supported:The SPNEGO mechanismThe WS-Trust for SPNEGO standard (requires the SPNEGO mechanism)The SPNEGO over HTTP standard (requires the SPNEGO mechanism)Signing and encrypting using the Kerberos session keysIt is possible to use the KERBEROS mechanism with the SPNEGO over HTTP standard, but this would be non-standard. Native GSS Library Location:If you have opted to use the native GSS library, enter the location of the GSS library in the field provided, for example, /usr/lib/libgssapi.so. On Linux, the library is called libgssapi.so. . Note This setting is only required when this library is in a non-default location. Native GSS Trace:Use this option to enable debug tracing for the native GSS library. Related Links | https://docs.axway.com/bundle/APIGateway_762_PolicyDevGuide_allOS_en_HTML5/page/Content/PolicyDevTopics/kerberos_configuration.htm | 2019-02-15T20:54:11 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.axway.com |
Configuring Client Certificate Authentication
Users logging on to a NetScaler Gateway virtual server can also be authenticated based on the attributes of the client certificate that is presented to the virtual server. Client certificate authentication can also be used with another authentication type, such as LDAP or RADIUS, to provide two-factor authentication.
To authenticate users based on the client-side certificate attributes, client authentication should be enabled on the virtual server and the client certificate should be requested. You must bind a root certificate to the virtual server on NetScaler Gateway.
When users log on to the NetScaler Gateway virtual server, after authentication, the user name client certificate as the default authentication typeTo configure the client certificate as the default authentication type
- In the configuration utility, on the Configuration tab, in the navigation pane, expand NetScaler NetScaler Gateway, users are authenticated based on certain attributes of the client certificate. After authentication is completed successfully, the user name or the user and group name of the user are extracted from the certificate and any policies specified for that user are applied. | https://docs.citrix.com/en-us/netscaler-gateway/11-1/authentication-authorization/configure-client-cert-authentication.html | 2019-02-15T22:28:47 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.citrix.com |
FIWARE Stream Oriented Generic Enabler - Overview¶
Introduction¶
The Stream Oriented Generic Enabler (GE) provides a framework devoted to simplify the development of complex interactive multimedia applications through a rich family of APIs and toolboxes. It provides a media server and a set of client APIs making simple the development of advanced video applications for WWW and smartphone platforms. The Stream Oriented GE features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows. It also provides advanced media processing capabilities involving computer vision, video indexing, augmented reality and speech analysis.
The Stream Oriented GE modular architecture makes simple the integration of third party media processing algorithms (i.e. speech recognition, sentiment analysis, face recognition, etc.), which can be transparently used by application developers as the rest of built-in features.
The Stream Oriented GE’s core element is a Media Server, responsible for media transmission, processing, loading and recording. It is implemented in low level technologies based on GStreamer to optimize the resource consumption. It provides the following features:
- Networked streaming protocols, including HTTP (working as client and server), RTP and WebRTC.
- Group communications (MCUs and SFUs functionality) supporting both media mixing and media routing/dispatching.
- Generic support for computational vision and augmented reality filters. - Media storage supporting writing operations for WebM and MP4 and playing in all formats supported by GStreamer.
- Automatic media transcodification between any of the codecs supported by GStreamer including VP8, H.264, H.263, AMR, OPUS, Speex, G.711, etc.
Table of Contents¶
- Programmers Guide
- Installation and Administration Guide
- Architecture Description
- Open API Specification | https://kurento.readthedocs.io/en/stable/ | 2019-02-15T21:54:37 | CC-MAIN-2019-09 | 1550247479159.2 | [] | kurento.readthedocs.io |
Support¶
If you have questions or issues about Requests, there are several options:
Send a Tweet¶
If your question is less than 140 characters, feel free to send a tweet to @kennethreitz.
I’m also available as kennethreitz on Freenode. | http://docs.python-requests.org/en/v2.4.3/community/support/ | 2019-02-15T21:16:19 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.python-requests.org |
Committing a Backup Job for Virtual Server Agents
Administrators can commit a backup job for a subclient that protects multiple virtual machines in the following situations:
- Commit a running backup job when backups for one or more virtual machines have completed, and you want immediate access to backup data without waiting for other VMs in the job to complete.
- Commit scheduled backup jobs automatically to interrupt jobs that are running beyond a scheduled backup window.
About This Task
When the job is committed, any virtual machine backups that have completed successfully are retained, and the current index for the job is archived so that backed up VMs can be browsed for restores. All virtual machine backups that are not yet completed are marked as failed so that those virtual machines can be protected when a subsequent incremental job is run to back up unprotected virtual machines.
Notes:
- A backup job commit is available only for regular streaming backup jobs. Commit is not available for synthetic full backups, SnapProtect backups, backup copy, or catalog snapshot jobs.
- The Commit option is available for running jobs when at least one full virtual machine is completed, and at least 2 GB of data have been written for the backup job.
- Only active running jobs can be committed, not suspended, waiting, or killed jobs. If a backup job is suspended and then resumed, the resumed job must write at least 2 GB of data before the Commit option is available.
- When you run a job to back up unprotected virtual machines, any virtual machines that were marked as Failed for the original backup job are included in the new job.
Commit requests are submitted automatically for jobs that exceed the expected total running time. You can set the total running time for individual backup job or use schedule policies for Virtual Server Agent backup jobs. For more information, see Setting the Total Running Time for Jobs.
Procedure
- From the CommCell Console ribbon, on the Home tab, click Job Controller.
The Job Controller window is displayed.
- Right-click a backup job and click Commit.
Note: After backup jobs are committed, the job progress bar for a subsequent backup may not show the actual progress.
- To complete the commit request, click Yes.
Result
When a Commit request is processed, either automatically based on the scheduled backup window or manually by committing a running job, the following actions occur:
- When the job is committed, any virtual machine backups that have completed successfully are retained, and the current index for the job is archived so that backed up VMs can be browsed for restore operations.
- The backup job status changes to Archiving Index, and the index cache is updated for virtual machines that were successfully backed up.
- Any virtual machines that were successfully backed up are marked as Completed in the Job Details window for the job.
- Any virtual machines for which backups were pending or in progress are marked as Failed, so that those virtual machines can be protected when a subsequent incremental job is run to back up unprotected virtual machines.
- When the job completes, it is marked as Completed with errors in the Job Controller window.
- On the Backup History window, the job status is Committed. | http://docs.snapprotect.com/netapp/v11/article?p=products/vsa/t_vsa_backup_job_commit.htm | 2019-02-15T21:18:12 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.snapprotect.com |
]]
Parameters
Remarks.
Examples
The following example displays the Team Foundation access control lists (ACLs) for 314.cs.
c:\projects>tf permission 314.cs
The following example displays the ACL information that relates to the group "developers" for the collection that is located at.
c:\projects>tf permission /group:[teamproject]\developers /collection:
See Also
Reference
Command-Line Syntax (Version Control)
Other Resources
Tf Command-Line Utility Commands | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/0dsd05ft(v=vs.100) | 2020-11-24T00:40:00 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
In an environment with multiple availability zones, if a site failure in Region B occurs, the witness appliance in Region B becomes inaccessible. As a result, one fault domain becomes unavailable for the vSAN stretched cluster. To continue provisioning virtual machines in Region A, configure vSAN by using the vSAN default storage policy to force-provision these virtual machines although they will be non-compliant until the witness appliance rejoins Region A.You perform this operation only when multiple availability zones are configured in your environment.
Procedure
- In a Web browser, log in to vCenter Server by using the vSphere Client.
- In the Policies and profile inventory, click VM storage policies.
- On the VM storage policies page, select the vSAN default storage policyfor management vCenter Server and click Edit settings.
The Edit VM storage policy wizard opens.
- On the Name and description page, leave the default values and click Next.
- On the vSAN page, click the Advanced policy rules tab, turn on the Force provisioning toggle switch and click Next.
- On the Storage compatibility page, leave the default values and click Next.
- On the Review and finish page, click Finish.
VM Storage Policy in Use dialog box appears.
- In the vSAN Storage Policy in Use dialog box, from the Reapply to VMs drop-down menu, select Manually later and click Yes. | https://docs.vmware.com/en/VMware-Validated-Design/services/vmware-cloud-foundation-391-sddc-site-protection-and-recovery/GUID-44EB0F2B-CBA9-48BD-ADAE-8CD12ADB562D.html | 2020-11-24T01:06:16 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
vector pdf content
A vector PDF content or vector data contained in PDF documents is represented by content that can be adjusted according to the media device it's displayed or printed on. Because of it's parametrical nature e.g. it can be defined using drawing operations supported by PDF, it differs from the raster data that has a static representation and can't be adjusted without the loss of quality. But purely vector documents are rare as document authors tend to include raster data in supported formats, such as photographs or scanned images. | https://docs.apitron.com/documentation/index.php?title=vector_pdf_content&oldid=24 | 2020-11-24T00:36:26 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.apitron.com |
Setting up the SIEM User and the OAuth App on the Tenant
To set up the SIEM user and OAuth app:
- On the Centrify Admin Portal, Select Apps > Web Apps.
Click Add Web Apps. Once it opens a page, click the Custom tab.
Locate the OAuth2 Client and click Add.
When prompted to add the Web App, OAuth2 Client, click Yes.
On the Settings tab, in the Application ID field, enter oauthsiem.
On the General Usage tab, leave the defaults as shown.
On the Tokens tab, for Auth methods, check Client Creds and click Save.
On the Scope tab, under Scope definitions, click Add to add a new scope.
On the Scope definitions dialog:
- In the Name field, enter siem.
In the Allowed REST APIs section, click Add, and enter Redrock/query.*
Click Save.
- On the Centrify Admin Portal, select Access > Users > Add User.
- On the Create Centrify Directory User page:
For the Login Name, enter siemuser.
For the Suffix, enter centrify.com (or leave as is).
For the Password and Confirm Password, enter the password of your choice.
- For Status:
Check Password never expires.
Select Is OAuth confidential client. This automatically also selects the options Password never expires and Is Service User
- On the Centrify Admin Portal, Select Access > Roles > Add Role.
- Once page opens, in Description tab:
For the Name, enter service account and click Save.
This entry serves as the role name.
- Open the newly created role, and select the Members tab:
Click Add and search the siemuser that you created earlier.
Click Save.
- Open the Administrative Rights tab:
Click Add.
In the Add Rights list, check Read Only System Administration
Click Add.
Click Save.
- Navigate to Apps > Web Apps > Permissions. Click Add and add the role you created above: service account.
- Perform final checks to make sure that:
On the Centrify Admin Portal, on the Access > Users tab:
The siemuser created earlier, is shown as the Centrify Directory User. Click it to open the user’s page.
In Roles section for this user, the role named service account must be listed, with Read Only System Administration in Administrative Rights.
On the Centrify Admin Portal, on the Apps > Web Apps tab:
Select OAuth2 Client
In the Permissions tab > Name column shows the earlier created role service account with the View and Run permissions checked
On the Apps tab, the Tokens section shows under Auth methods that Client Creds is checked. | https://docs.centrify.com/Content/IntegrationContent/SIEM/platform/platform-setup.htm | 2020-11-24T01:37:58 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.centrify.com |
nodetool getcompactionthroughput
Print the throughput cap in megabytes (MB) per second for compaction in the system.
cassandra-env.shThe location of the cassandra-env.sh file depends on the type of installation:
Print the throughput cap in megabytes (MB) per second for compaction in the system.
Synopsis
nodetool [options] getcompactionthroughput getcompactionthroughput getcompactionthroughput command prints the current compaction
throughput.
Examples
$ nodetool -u username -pw password getcompactionthroughput | https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/tools/nodetool/toolsGetcompactionthroughput.html | 2020-11-24T00:50:12 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.datastax.com |
End-of-Life (EoL)
Install Updates for Panorama in an HA Configuration
To ensure a seamless failover when you update the Panorama software in a high availability (HA) configuration, the active and passive Panorama peers must be running the same Panorama release with the same Applications database version. The following example describes how to upgrade an HA pair (active peer is Primary_A and passive peer is Secondary_B).
Panorama 8.0 requires the following minimum content release versions:
- Applications and Threats content release version 655
- Antivirus content release version 2137
- Upgrade the Panorama software on the Secondary_B (passive) peer.Perform one of the following tasks on the Secondary_B peer:After the upgrade, this Panorama transitions to a non-functional state because the peers are no longer running the same software release.
- Suspend the Primary_A peer to force a failover.On the Primary_A peer:
- In theOperational Commandssection (),PanoramaHigh AvailabilitySuspend local Panorama.
- Verify that state issuspended(displayed on bottom-right corner of the web interface).The resulting failover should cause the Secondary_B peer to transition toactivestate.
- Upgrade the Panorama software on the Primary_A (currently passive) peer.Perform one of the following tasks on the Primary_A peer:After you reboot, the Primary_A peer is initially still in the passive state. Then, if preemption is enabled (default), the Primary_A peer automatically transitions to the active state and the Secondary_B peer reverts to the passive state.If you disabled preemption, manually Restore the Primary Panorama to the Active State.
- Verify that both peers are now running any newly installed content release versions and the newly installed Panorama release.On theDashboardof each Panorama peer, check the Panorama Software Version and Application Version and confirm that they are the same on both peers and that the running configuration is synchronized.
-. | https://docs.paloaltonetworks.com/panorama/8-0/panorama-admin/set-up-panorama/install-content-and-software-updates-for-panorama/install-updates-for-panorama-with-ha-configuration.html | 2020-11-24T01:20:25 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.paloaltonetworks.com |
Tag clouds
A tag cloud visualization is a visual representation of text data, typically used to visualize free form text. Tags are usually single words, and the importance of each tag is shown with font size or color.
The font size for each word is determined by the metrics aggregation.
For more information, see Y-axis aggregations.
The buckets aggregations determine what information is being retrieved from your data set.
Before you choose a buckets aggregation, select the Split Tags option.
You can specify the following bucket aggregations for tag cloud visualization:
You can customize your visualization. For more information, see Customizing aggregations. highly variable data sets.
- Orientation
- You can select how to orientate your text in the tag cloud. You can choose one of the following options: Single, right angles and multiple.
- Font Size
- Enables you to set minimum and maximum font size to use for this visualization.
Viewing detailed information
For information on displaying the raw data, see Visualization Spy. | https://docs.siren.io/10.2.1/platform/en/siren-investigate/visualize/tag-clouds.html | 2020-11-24T00:59:27 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.siren.io |
How Amazon Simple Email Service (Amazon SES) uses AWS KMS
You can use Amazon Simple Email Service (Amazon SES) to receive email, and (optionally) to encrypt the received email messages before storing them in an Amazon Simple Storage Service (Amazon S3) bucket that you choose. When you configure Amazon SES to encrypt email messages, you must choose the AWS KMS customer master key (CMK) under which Amazon SES encrypts the messages. You can choose the AWS managed CMK for Amazon SES (its alias is aws/ses), or you can choose a symmetric customer managed CMK that you created in AWS KMS.
Amazon SES supports only symmetric CMKs. You cannot use an asymmetric CMK to encrypt your Amazon SES email messages. For help determining whether a CMK is symmetric or asymmetric, see Identifying symmetric and asymmetric CMKs.
For more information about receiving email using Amazon SES, go to Receiving Email with Amazon SES in the Amazon Simple Email Service Developer Guide.
Topics
Overview of Amazon SES encryption using AWS KMS
When you configure Amazon SES to receive email and encrypt the email messages before saving them to your S3 bucket, the process works like this:
You create a receipt rule for Amazon SES, specifying the S3 action, an S3 bucket for storage, and a KMS customer master key (CMK) for encryption.
Amazon SES receives an email message that matches your receipt rule.
Amazon SES requests a unique data key encrypted with the KMS CMK that you specified in the applicable receipt rule.
AWS KMS creates a new data key, encrypts it with the specified CMK, and then sends the encrypted and plaintext copies of the data key to Amazon SES.
Amazon SES uses the plaintext data key to encrypt the email message and then removes the plaintext data key from memory as soon as possible after use.
Amazon SES puts the encrypted email message and the encrypted data key in the specified S3 bucket. The encrypted data key is stored as metadata with the encrypted email message.
To accomplish Step 3 through Step 6, Amazon SES uses the AWS–provided Amazon S3 encryption client. Use the same client to retrieve your encrypted email messages from Amazon S3 and decrypt them. For more information, see Getting and decrypting email messages.
Amazon SES encryption context
When Amazon SES requests a data key to encrypt your received email messages (Step 3 in the Overview of Amazon SES encryption using AWS KMS), it includes an encryption context in the request. The encryption context provides additional authenticated data (AAD) that AWS KMS uses to ensure data integrity. The encryption context is also written to your AWS CloudTrail log files, which can help you understand why a given customer master key (CMK) was used. Amazon SES uses the following encryption context:
The ID of the AWS account in which you've configured Amazon SES to receive email messages
The rule name of the Amazon SES receipt rule that invoked the S3 action on the email message
The Amazon SES message ID for the email message
The following example shows a JSON representation of the encryption context that Amazon SES uses:
{ "aws:ses:source-account": "
111122223333", "aws:ses:rule-name": "
example-receipt-rule-name", "aws:ses:message-id": "
d6iitobk75ur44p8kdnnp7g2n800" }
Giving Amazon SES permission to use your AWS KMS customer master key (CMK)
To encrypt your email messages, you can use the AWS managed customer master key (CMK) in your account for Amazon SES (aws/ses), or you can use a customer managed CMK that you create. Amazon SES already has permission to use the AWS managed CMK on your behalf. However, if you specify a customer managed CMK when you add the S3 action to your Amazon SES receipt rule, you must give Amazon SES permission to use the CMK to encrypt your email messages.
To give Amazon SES permission to use your customer managed CMK, add the following statement to that CMK's key policy:
{ "Sid": "Allow SES to encrypt messages using this CMK", "Effect": "Allow", "Principal": {"Service": "ses.amazonaws.com"}, "Action": [ "kms:Encrypt", "kms:GenerateDataKey*" ], "Resource": "*", "Condition": { "Null": { "kms:EncryptionContext:aws:ses:rule-name": false, "kms:EncryptionContext:aws:ses:message-id": false }, "StringEquals": {"kms:EncryptionContext:aws:ses:source-account": "
ACCOUNT-ID-WITHOUT-HYPHENS"} } }
Replace
with the
12-digit ID of the AWS account in which you've configured Amazon SES to receive email
messages.
This policy statement allows Amazon SES to encrypt data with this CMK only under these
conditions:
ACCOUNT-ID-WITHOUT-HYPHENS
Amazon SES must specify
aws:ses:rule-nameand
aws:ses:message-idin the
EncryptionContextof their AWS KMS API requests.
Amazon SES must specify
aws:ses:source-accountin the
EncryptionContextof their AWS KMS API requests, and the value for
aws:ses:source-accountmust match the AWS account ID specified in the key policy.
For more information about the encryption context that Amazon SES uses when encrypting your email messages, see Amazon SES encryption context. For general information about how AWS KMS uses the encryption context, see encryption context.
Getting and decrypting email messages
Amazon SES does not have permission to decrypt your encrypted email messages and cannot decrypt them for you. You must write code to get your email messages from Amazon S3 and decrypt them. To make this easier, use the Amazon S3 encryption client. The following AWS SDKs include the Amazon S3 encryption client:
AWS SDK for Java
– See AmazonS3EncryptionClient in the AWS SDK for Java API Reference.
AWS SDK for Ruby
– See Aws::S3::Encryption::Client in the AWS SDK for Ruby API Reference.
AWS SDK for .NET
– See AmazonS3EncryptionClient in the AWS SDK for .NET API Reference.
AWS SDK for Go
– See s3crypto in the AWS SDK for Go API Reference.
The Amazon S3 encryption client simplifies the work of constructing the necessary requests to Amazon S3 to retrieve the encrypted email message and to AWS KMS to decrypt the message's encrypted data key, and of decrypting the email message. For example, to successfully decrypt the encrypted data key you must pass the same encryption context that Amazon SES passed when requesting the data key from AWS KMS (Step 3 in the Overview of Amazon SES encryption using AWS KMS). The Amazon S3 encryption client handles this, and much of the other work, for you.
For sample code that uses the Amazon S3 encryption client in the AWS SDK for Java to do client-side decryption, see the following:
Using a CMK stored in AWS KMS in the Amazon Simple Storage Service Developer Guide.
Amazon S3 Encryption with AWS Key Management Service
on the AWS Developer Blog. | https://docs.aws.amazon.com/kms/latest/developerguide/services-ses.html | 2020-11-24T01:05:12 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.aws.amazon.com |
Learn
If you're brand new to the Dagster system, we recommend starting with the tutorial, which will walk you through the most important concepts in Dagster step by step.
The Guides provide deep dives into major areas of the system, motivated by common data engineering and data science workflows
- Guides
The How-Tos provide easily digestable short answers to common questions about Dagster
The Principles are the fundamental underpinnings of Dagster's design.
The Concepts section is a reference and glossary for the fundamental principles underpinning the design of Dagster and the basic concepts of the system
- Concepts
The Demos section includes worked examples that more closely align with real-world data applications. | https://docs.dagster.io/0.8.3/docs/learn | 2020-11-24T01:02:26 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.dagster.io |
Vehicle Budgeting Projections
This report calculates a monthly miles and vehicle costs based on the yearly totals divided by the number of months since the last end of year process. Then these figures get extended out for the number of months the user inputs into the report prompt. Also in this report you can factor in an inflation percentage to add to the projected total. You will be prompted to select a facility, months since the last year closing, starting and ending department range, the number of months to project out, and inflation percentage.
This report would be great for those who have to work under a budget and have to provide figures for a new budget year. Also these figures can be used to help identify vehicles that need to be replaced.
Prints the following information:
Report is sorted by: Department
Vehicle number
Vehicle year
Vehicle make
Vehicle model
Calculated current miles
Calculated current cost
Calculated cost per mile
Projected miles
Projected cost
Projected cost with inflation percent added
Subtotals by Department, and Grand Totals on all the projected fields. | https://docs.rtafleet.com/rta-manual/best-of-crystal-reports/vehicle-budgeting-projections/ | 2020-11-24T01:32:40 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.rtafleet.com |
Dynamics AX 2012: Meet the Table Control
Meet Dynamics AX 2012's Table Control. It's an unbound control that offering unique interactions with the user providing functionality not available with other "table or tabular" controls like ListView and Grid. The table allows the developer to swap underlying controls on a cell by cell basis, and has the ability to show a large number of columns-defined
at runtime if necessary.
At it's core, the table is a simple unbound table:
It allows you to specify the number of columns/rows and as an option to show gridlines. The table allows you to indicate which types of controls can be hosted within each cell:
A powerful feature of the table is in its ability to vary the input type per CELL. This is done by the developer adding an edit control method that returns the modeled control type that should be represented in that cell
Combined with the capabilities of the individual controls, different visuals and interactive uses are possible.
With the Table control, the developer will be able to:
- Model a table control on a form
- Set number of columns and rows
- Set height and width
- Add supported contained controls for interaction within each cell
- from the AOT, select and right click on the control- select add control
- On a cell by cell basis, the control calls the editcontrol method to learn what control to host in the cell
FormControl editControl(int column, int row){
if ((column == 2) || (column == 4)){
if (row > 1){
return intEdit;
}
else
return editline;
}
else{
return editline;
}
} | https://docs.microsoft.com/en-us/archive/blogs/tlefor/dynamics-ax-2012-meet-the-table-control | 2020-11-24T01:46:40 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
Streaming Virtual Texturing (SVT) is a feature that reduces GPU memory usage and texture loading times when you have many high resolution textures in your Scene. It splits textures into tiles, and progressively uploads these tiles to GPU memory when they are needed.
SVT lets you set a fixed memory cost. For full texture quality, the required GPU cache size depends mostly on the frame resolution, and not the number or resolution of textures in the Scene. The more high resolution textures you have in your Scene, the more GPU memory you can save with SVT.
SVT uses the Granite SDK runtime. The workflow requires no additional import time, no additional build step, and no additional streaming files. You work with regular Unity Textures in the Unity Editor, and Unity generates the Granite SDK streaming files when it builds your project. | https://docs.unity3d.com/kr/2020.2/Manual/svt-streaming-virtual-texturing.html | 2020-11-24T01:03:51 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.unity3d.com |
Epson printer management details about the Epson printer and Workspace ONE UEM powered by AirWatch integration.
Overview
The Epson Printer Server for Workspace ONE UEM is an application developed on Ubuntu or Linux using the Mono framework and GTK# or GTK+. You can manage Epson printers and control its working from the Workspace ONE UEM console by configuring and deploying the following printer profiles.
- Wi-Fi
- Credentials
- Device Settings
- Custom Settings
- Simple Certificate Enrollment Protocol (SCEP)
Note: This guide assumes that you have the correct Epson software installed, and the necessary software to configure your printers. If you need more information or guidance on how to install this software, then please contact your Epson representative.
Supported Devices
Workspace ONE UEM supports the following Epson printer models:
- TM-T88V-i
- TM-T88VI and TM-T88VI-iHub. These models do not support SCEP.
Prerequisites to configure the Print Server
The following programs must be installed before running the Epson printer application:
- Mono Runtime Common 4.2.1 - If Ubuntu is used, the minimum required version is Ubuntu 16.04.
GTK# version 2 - If Ubuntu is used, install the gtk-sharp2 package. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/Epson/GUID-AWT-OVERVIEWOFEPSONPRINTER.html | 2020-11-24T00:43:42 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
404 error when accessing repository?
If you have purchased the extension and you are getting a 404 error when browsing to the repository you may have to accept an invitation to our GitHub organisation. This is a step required by GitHub before we can give you access to the private repositories.
Either check your GitHub email address for an invitation or browse to:
and accept the invitation. | https://docs.airnativeextensions.com/docs/faqs/error-404/ | 2020-11-24T01:05:33 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.airnativeextensions.com |
After you install Workflow Automation (WFA) in Microsoft Cluster Server (MSCS), you must configure WFA for high availability in MSCS using configuration scripts.
You must have created a backup of WFA.
WFA
The role must be in the Running status, and the individual resources must be in the Online state. | https://docs.netapp.com/wfa-51/topic/com.netapp.doc.onc-wfa-isg/GUID-123C7D11-6352-452C-B5AA-C98B0B833465.html?lang=en | 2020-11-24T01:30:56 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.netapp.com |
The Vesicash API supports basic authentication using Access Keys (public and private keys) which we provide to you within the Vesicash dashboard.
Public keys are meant to be used from any front-end integration like the vesicash widget, escrow pay link, escrow pay buttons and in our Mobile SDKs only. By design, public keys cannot modify any part of you or your customer account details besides initiating transactions and making payments. The private keys however, are to be kept secret. If for any reason you believe your private key has been compromised or you wish to reset them, you can do so from the dashboard.
Authenticate your API calls by including your private key in the header of each request you make.
--header 'accept: application/json' \--header 'V-PRIVATE-KEY: v_EnterThePrivateKeyThatWasGeneratedHere'
Upon request, we will provide you test keys to perform test transactions. However, after testing, we must switch to using the Live keys in production when you finally deploying your implementations.
API requests made without authentication will fail with the status code
401: Unauthorized | https://docs.vesicash.com/api-documentation/api-authentication | 2020-11-24T00:44:08 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vesicash.com |
DeleteInsight
Deletes the insight specified by the
InsightArn.
Request Syntax
DELETE /insights/
InsightArn+HTTP/1.1
URI Request Parameters
The request uses the following URI parameters.
- InsightArn
The ARN of the insight to delete.
Pattern:
.*\S.*
Required: Yes
Request Body
The request does not have a request body.
Response Syntax
HTTP/1.1 200 Content-type: application/json { "InsightArn": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- InsightArn
The ARN of the insight that was deleted.
Type: String
Pattern:
.*\S.*: | https://docs.aws.amazon.com/securityhub/1.0/APIReference/API_DeleteInsight.html | 2020-11-24T02:09:59 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.aws.amazon.com |
[−][src]Enum casperlabs_types::
system_contract_errors:: pos:: Error
Errors which can occur while executing the Proof of Stake contract.
Variants
The given validator is not bonded.
There are too many bonding or unbonding attempts already enqueued to allow more.
At least one validator must remain bonded.
Failed to bond or unbond as this would have resulted in exceeding the maximum allowed difference between the largest and smallest stakes.
The given validator already has a bond or unbond attempt enqueued.
Attempted to bond with a stake which was too small.
Attempted to bond with a stake which was too large.
Attempted to unbond an amount which was too large.
While bonding, the transfer from source purse to the Proof of Stake internal purse failed.
While unbonding, the transfer from the Proof of Stake internal purse to the destination purse failed.
Internal error: stakes were unexpectedly empty.
Internal error: the PoS contract's payment purse wasn't found.
Internal error: the PoS contract's payment purse key was the wrong type.
Internal error: couldn't retrieve the balance for the PoS contract's payment purse.
Internal error: the PoS contract's bonding purse wasn't found.
Internal error: the PoS contract's bonding purse key was the wrong type.
Internal error: the PoS contract's refund purse key was the wrong type.
Internal error: the PoS contract's rewards purse wasn't found.
Internal error: the PoS contract's rewards purse key was the wrong type.
Internal error: failed to deserialize the stake's key.
Internal error: failed to deserialize the stake's balance.
The invoked PoS function can only be called by system contracts, but was called by a user contract.
Internal error: while finalizing payment, the amount spent exceeded the amount available.
Internal error: while finalizing payment, failed to pay the validators (the transfer from the PoS contract's payment purse to rewards purse failed).
Internal error: while finalizing payment, failed to refund the caller's purse (the transfer from the PoS contract's payment purse to refund purse or account's main purse failed).
PoS contract's "set_refund_purse" method can only be called by the payment code of a deploy, but was called by the session code.
Trait Implementations
impl CLTyped for Error[src]
impl Clone for Error[src]
impl Copy for Error[src]
impl Debug for Error[src]
impl Display for Error[src]
impl Eq for Error[src]
impl Fail for Error[src]
fn name(&self) -> Option<&str>[src]
fn cause(&self) -> Option<&dyn Fail>[src]
fn backtrace(&self) -> Option<&Backtrace>[src]
fn context<D>(self, context: D) -> Context<D> where[src]
D: Display + Send + Sync + 'static,
D: Display + Send + Sync + 'static,
fn compat(self) -> Compat<Self>[src]
impl From<Error> for ApiError[src]
impl From<Error> for Error[src]
impl PartialEq<Error> for Error[src]
impl StructuralEq for Error[src]
impl StructuralPartialEq for Error[src]
impl ToBytes for Error[src]
Auto Trait Implementations
impl RefUnwindSafe for Error
impl Send for Error
impl Sync for Error
impl Unpin for Error
impl UnwindSafe for Error
Blanket Implementations
impl<T> Any for T where[src]
T: 'static + ?Sized,
T: 'static + ?Sized,
impl<T> AsFail for T where[src]
T: Fail,
T: Fail,> ToOwned for T where[src]
T: Clone,
T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T[src]
fn clone_into(&self, target: &mut T)[src]
impl<T> ToString for T where[src]
T: Display + ?Sized,
T: Display + ?Sized,>, | https://docs.rs/casperlabs-types/0.6.1/casperlabs_types/system_contract_errors/pos/enum.Error.html | 2020-11-24T01:16:19 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.rs |
Documentation Conventions
- The C’s
- Terminology table
- Line breaks
- Resources
This documentation provides a writing style guide that portrays professionalism and efficiency in delivering technical content in Alluxio documentation.
The C’s
The C’s, in order of importance:
- Be correct
- Be concise
- Be consistent
- Be ceremonial or formal (because ceremonial was the best synonym to formal that started with a C)
Correctness = Don’t be wrong
No documentation is better than incorrect documentation.
- Information conveyed is accurate
- Use a spell checker to fix typos
- Capitalize acronyms
- Ex. AWS, TTL, UFS, API, URL, SSH, I/O
- Capitalize proper nouns
- Ex. Alluxio, Hadoop, Java
Conciseness = Don’t use more words than necessary
No one wants to read more words than necessary.
- Use the imperative mood, the same tone used when issuing a command
- “Run the command to start the process”
- Not “Next, you can run the command to start the process”
- “Include a SocketAppender in the configuration…”
- Not “A SocketAppender can be included in the configuration…”
- Use the active voice
- “The process fails when misconfigured”
- Not “The process will fail when misconfigured”
- Not “It is known that starting the process will fail when misconfigured”
- Don’t use unnecessary punctuation
- Avoid using parentheses to de-emphasize a section
- Incorrect example: “Alluxio serves as a new data access layer in the ecosystem, residing between any persistent storage systems (such as Amazon S3, Microsoft Azure Object Store, Apache HDFS, or OpenStack Swift) and computation frameworks (such as Apache Spark, Presto, or Hadoop MapReduce).”
- Reduce the use of dependent clauses that add no content
- Remove usages of the following:
- For example, …
- However, …
- First, …
Consistency = Don’t use different forms of the same word or concept
There are many technical terms used throughout; it can potentially cause confusion when the same idea is expressed in multiple ways.
- See terminology table below
- When in doubt, search to see how similar documentation expresses the same term
- Code-like text should be annotated with backticks
- File paths
- Property keys and values
- Bash commands or flags
- Code blocks should be annotated with the associated file or usage type, e.g.:
```javafor Java source code
```propertiesfor a Java property file
```consolefor an interactive session in shell
```bashfor a shell script
- Alluxio prefixed terms, such as namespace, cache, or storage, should be preceded by “the” to differentiate from the commonly used term, but remain in lowercase if not a proper noun
- Ex. The data will be copied into the Alluxio storage.
- Ex. When a new file is added to the Alluxio namespace, …
- Ex. The Alluxio master never reads or writes data directly …
Formality = Don’t sound like a casual conversation
Documentation is not a conversation. Don’t follow the same style as you would use when chatting with someone.
- Use the serial comma, also known as the Oxford comma, when listing items
- Example: “Alluxio integrates with storage systems such as Amazon S3, Apache HDFS, and Microsoft Azure Object Store.” Note the last comma after “HDFS”.
- Avoid using contractions; remove the apostrophe and expand
- Don’t -> Do not
- One space separates the ending period of a sentence and starting character of the next sentence; this has been the norm as of the 1950s.
- Avoid using abbreviations
- Doc -> Documentation
Terminology table
Line breaks
Each sentence starts in a new line for ease of reviewing diffs. We do not have an official maximum characters per line for documentation files, but feel free to split sentences into separate lines to avoid needing to scroll horizontally to read. | https://docs.alluxio.io/os/user/stable/en/contributor/Documentation-Conventions.html | 2020-08-03T12:42:31 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.alluxio.io |
This step is necessary to platform to your SugarCRM instance. This is one of the steps in the authentication process of SugarCRM component.
If using a version of SugarCRM released after Winter ‘18, you must register a platform on your SugarCRM instance
Go back and authenticate the on platform UI. | https://docs.elastic.io/components/sugarcrm/register-sugarcrm-value | 2020-08-03T11:32:37 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.elastic.io |
Cloud Services
Overview
Mixed Reality cloud services like Azure Remote Rendering and Azure Spatial Anchors help developers build compelling immersive experiences on a variety of platforms. These services allow you to integrate spatial awareness into your projects when you're making applications for 3D training, predictive equipment maintenance, and design review, all in the context of your users’ environments.
In addition, there are other Azure Services that you can easily add into your existing projects that don't fall under the Mixed Reality umbrella. If you're developing for Unity, we have a wide range of tutorials listed at the bottom of this page to get you started.
Mixed Reality services
Azure Remote Rendering
Azure Remote Rendering (ARR) is a service that lets you to render highly complex 3D models in real time. ARR is currently in public preview. It can be added to your Unity or Native C++ projects targeting HoloLens 2 or Windows desktop PC.
Azure Spatial Anchors
Azure Spatial Anchors (ASA) is a cross-platform service that allows you to build spatially aware mixed reality applications. With Azure Spatial Anchors, you can map, persist, and share holographic content across multiple devices, at real-world scale.
The service can be developed in a host of environments and deployed to a large group of devices and platforms. This gives them special dispensation for their own list of available platforms:
- Unity for HoloLens
- Unity for iOS
- Unity for Android
- Native iOS
- Native Android
- C++/WinRT and DirectX for HoloLens
- Xamarin for iOS
- Xamarin for Android
Standalone services
The standalone services listed below do not apply to Mixed Reality, but can be helpful in a wide range of development contexts. If you're developing in Unity, each of these services can integrated into your new or existing projects. | https://docs.microsoft.com/en-gb/windows/mixed-reality/mixed-reality-cloud-services | 2020-08-03T13:10:25 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['images/showcase-app.png',
'Example of Azure Remote Rendering in Unity showcase app'],
dtype=object)
array(['images/persistence.gif', 'Example of Azure Spatial Anchors'],
dtype=object) ] | docs.microsoft.com |
This package contains resource adaptor types and resource adaptors for:
Diameter CCA
Diameter Ro
Diameter Sh
Diameter Rf
Diameter Gx
Diameter Base
Each resource adaptor is supplied with:
The resource adaptor deployable unit
Source code for an example service
Ant scripts to deploy the resource adaptor and example service to Rhino
Resource adaptor type API Javadoc
A standalone simulator that can respond to Diameter requests (not included for Base RA) | https://docs.rhino.metaswitch.com/ocdoc/books/devportal-downloads/1.0/downloads-index/diameter.html | 2020-08-03T11:29:07 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.rhino.metaswitch.com |
System built-ins
Flux contains many preassigned values. These preassigned values are defined in the source files for the various built-in packages. . TypeExpression = MonoType ["where" Constraints] . MonoType = Tvar | Basic | Array | Record | Function . Tvar = "A" … "Z" . Basic = "int" | "uint" | "float" | "string" | "bool" | "time" | "duration" | "bytes" | "regexp" . Array = "[" MonoType "]" . Record = ( "{" [Properties] "}" ) | ( "{" Tvar "with" Properties "}" ) . Function = "(" [Parameters] ")" "=>" MonoType . Properties = Property { "," Property } . Property = identifier ":" MonoType . Parameters = Parameter { "," Parameter } . Parameter = [ "<-" | "?" ] identifier ":" MonoType . Constraints = Constraint { "," Constraint } . Constraint = Tvar ":" Kinds . Kinds = identifier { "+" identifier } .
Example
builtin filter : (<-tables: [T], fn: (r: T) -> bool) -> . | https://v2.docs.influxdata.com/v2.0/reference/flux/language/system-built-ins/ | 2020-08-03T12:57:02 | CC-MAIN-2020-34 | 1596439735810.18 | [] | v2.docs.influxdata.com |
Date: Mon, 3 Aug 2020 05:31:07 -0700 (PDT) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_134214_1919094546.1596457867436" ------=_Part_134214_1919094546.1596457867436 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
DRAFT DOCUMENT
The Zuora connector = ;allows you to access the Zuora REST API th= rough WSO2 ESB. Zuora's cloud technologies help companies build s= ubscription business models by establishing, cultivating and monetizing rec= urring customer relationships.
To get started, go to Configuring Zuora Operations.= p>
For general information on using connectors and their operations in your= ESB configurations, see Using a Conne= ctor. To download the connector, go to r/zuora, and then add and enable the connector in your ESB instance.= p> | https://docs.wso2.com/exportword?pageId=50495519 | 2020-08-03T12:31:07 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.wso2.com |