Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
#include <wx/event.h> This event is sent just after the actual window associated with a wxWindow object has been created. Since it is derived from wxCommandEvent, the event propagates up the window hierarchy. The following event handler macros redirect the events to member function handlers 'func' with prototypes like: void handlerFuncName(wxWindowCreateEvent& event). Event macros: EVT_WINDOW_CREATE(func): Process a wxEVT_CREATE event. Member functions: the constructor, and GetWindow(), which returns the window being created.
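The same notification is exposed by wxPython, which mirrors the C++ API; the following is only a minimal sketch (wxPython is an assumption for illustration, since this page documents the C++ classes):

```python
# Minimal wxPython sketch: react to wxEVT_CREATE (wx.EVT_WINDOW_CREATE).
# wxPython is assumed for illustration; the page itself documents the C++ API.
import wx

class Frame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Create-event demo")
        # The event propagates up the hierarchy, so binding on the frame
        # also sees child windows being created.
        self.Bind(wx.EVT_WINDOW_CREATE, self.on_create)
        wx.Panel(self)  # creating a child triggers the event

    def on_create(self, event):
        # GetWindow() returns the window being created.
        print("created:", event.GetWindow())
        event.Skip()

app = wx.App()
Frame().Show()
app.MainLoop()
```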
https://docs.wxwidgets.org/3.0/classwx_window_create_event.html
2018-12-10T06:34:01
CC-MAIN-2018-51
1544376823318.33
[]
docs.wxwidgets.org
Cloud Cruiser’s Amazon Web Services (AWS) solution retrieves consumption data using the AWS Detailed Billing Report API. The template workbook provided for it uses passthrough rates for AWS services, passing along the rates and charges set by AWS, but you have the ability to set your own rates instead. For more detailed information about collecting Amazon Web Services data, see the following topics. The collection process converts the retrieved billing data to HPE Consumption Analytics Portal record files. These files are named aws-detailed-<accountId>-<year>-<month>_<utcTimestamp>.csv.zip, for example aws-detailed-719988776655-2014-9_20140903T200458.csv.zip. AWS generates a detailed billing report every hour. For complete information about the contents of a detailed billing report, see Understand Your Usage with Detailed Billing Reports in the AWS documentation. For details about the data HPE Consumption Analytics Portal collects from AWS, see Dimensions and measures collected from AWS. Because AWS can unpredictably add or change the services it bills for, there is no way for you to create a static set of services in HPE Consumption Analytics Portal that would remain able to match those services and charge your customers for all AWS usage. To avoid collecting measures that go uncharged, you must create new services from your collected AWS data before publishing that data. The template workbook for AWS includes a flow that does this. If you want to use the CloudSmart-Now solution for AWS, you must complete the following tasks. Before creating an Amazon Web Services collection, you must set up your AWS account to work with the HPE Consumption Analytics Portal collector (see To set up your AWS account). The CloudSmart-Now configuration for AWS is designed to work with minimal configuration. If the default configuration does not suit your needs, see the information in Advanced AWS collection options to guide your decisions and changes. To collect, transform, and publish AWS data, configure the AWS1 collection to work with your AWS account: the AWS1 data source and the AWS Resource Ownership data source. For more information about tracking resource ownership, see Assigning resource ownership, and set the UseOwnershipFile parameter to true. After configuring HPE Consumption Analytics Portal to collect, transform, and publish your AWS data, create a schedule that defines when and how often to run those processes. You need two schedules for this workbook, one for daily data and one for monthly data. To schedule regular daily collection: AWS Daily. Daily schedule for AWS data. Daily. 22:00. Previous Day. Include the AWS1 and ResourceOwnership collections, and the following flows: Ownership, ImportServices, PublishUsage, ImportCustomers, HandleUnknowns, Cleanup. To schedule regular monthly collection: AWS Monthly. Monthly schedule for AWS data. Monthly. 22:45. Previous Day. Include the AWS_EndOfMonth and ResourceOwnership collections, and the following flows: Ownership, ImportServices, PublishEndOfMonthUsage, ImportCustomers, HandleUnknowns.
https://docs.consumption.support.hpe.com/CC4/03Collecting%2C_transforming%2C_and_publishing/Amazon_Web_Services
2018-12-10T06:08:07
CC-MAIN-2018-51
1544376823318.33
[]
docs.consumption.support.hpe.com
SQL Server Backup and Restore with Microsoft Azure Blob Storage Service. This feature has been enhanced in SQL Server 2016 (13.x) to provide increased performance and functionality through the use of block blobs, Shared Access Signatures, and striping. Note: For SQL Server versions prior to SQL Server 2012 SP1 CU2, you can use the add-in SQL Server Backup to Microsoft Azure Tool to quickly and easily create backups to Microsoft Azure storage. For more information, see the download center. Benefits of Using the Microsoft Azure Blob Service for SQL Server Backups: Flexible, reliable, and limitless off-site storage: storing your backups on Microsoft Azure Blob storage gives you a flexible, reliable, and easily accessible off-site option. Important: Through the use of block blobs in SQL Server 2016 (13.x), you can stripe your backup set to support backup file sizes up to 12.8 TB. Microsoft Azure Billing Considerations. See Also: SQL Server Backup to URL Best Practices and Troubleshooting; Back Up and Restore of System Databases (SQL Server); Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016 databases
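As a rough illustration only, here is a hedged Python sketch that issues a BACKUP ... TO URL statement through pyodbc. The server, database, storage account, container, and connection string are all placeholders, and a SQL Server credential for the container URL (for example, a Shared Access Signature credential) is assumed to exist already:

```python
# Hedged sketch: run a SQL Server backup to Azure Blob storage via pyodbc.
# All names (server, database, storage account, container) are placeholders;
# a credential for the container URL must already exist in SQL Server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP cannot run inside a user transaction
)

backup_url = ("https://mystorageaccount.blob.core.windows.net/"
              "backups/AdventureWorks.bak")

# With a Shared Access Signature credential named after the container URL,
# no WITH CREDENTIAL clause is needed (SQL Server 2016+ block-blob behavior).
conn.execute(f"BACKUP DATABASE AdventureWorks TO URL = '{backup_url}'")
conn.close()
```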
https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service?view=sql-server-2017
2018-12-10T06:27:44
CC-MAIN-2018-51
1544376823318.33
[array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['media/backup-to-azure-blob-graphic.png?view=sql-server-2017', 'Backup to Azure blob graphic Backup to Azure blob graphic'], dtype=object) ]
docs.microsoft.com
Settings Introduction A typical application has some settings: if an application logs, a setting is the path to the log file. Settings are exposed on the App.settings property, for example: app.settings.logging.logfile. Remember that the current application is also accessible from the request object: request.app.
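A minimal sketch of how such a setting might be registered and read in Morepath; the logging/logfile names simply reuse the example from the text, and exact decorator usage should be checked against the Morepath docs for your version:

```python
# Hedged Morepath sketch: register a setting and read it from a view.
import morepath

class App(morepath.App):
    pass

# Register a setting in the "logging" section named "logfile".
@App.setting(section='logging', name='logfile')
def get_logfile():
    return '/var/log/myapp.log'

@App.path(path='')
class Root:
    pass

@App.view(model=Root)
def root_view(self, request):
    # request.app gives the current application; settings hang off of it.
    return "logging to " + request.app.settings.logging.logfile

if __name__ == '__main__':
    morepath.run(App())
```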
https://morepath.readthedocs.io/en/0.15/settings.html
2018-12-10T06:03:12
CC-MAIN-2018-51
1544376823318.33
[]
morepath.readthedocs.io
Each network request type has its own dashboard that graphically displays key performance indicators for that type over the selected time range. To select the time range, use the general time range dropdown at the top right of the UI. How the Network Request Dashboard is Organized The Network Request Dashboard displays summary key network request metrics for the time selected. To see any particular metric in the metric browser, click the metric value (displayed as a blue link). The trend graphs for the key performance indicators are: - Network Request Time: Average times in milliseconds. - Total Server Time: Displayed only if the mobile request is correlated with a server-side application. The total server time is the interval between the time that the server-side application receives the network request and the time that it finishes processing the request. Use this graph to determine, on average, how much time is spent on the network versus how much time is spent on the server to process the user's request. - Load: Total Requests and Requests per Minute. - Errors: Network Errors and HTTP Errors in total and per minute. - Related Business Transactions: If the request is correlated with a server-side application, the dashboard lists business transactions associated with the request below the performance metrics. You can click the link to a related business transaction to see its business transaction dashboard. If transaction snapshots were taken at the same time as the network request, the dashboard lists the transaction snapshots below the business transactions. See Transaction Snapshots. You can hover over any data point on any of the trend graphs to see the metric for a precise point.
https://docs.appdynamics.com/display/PRO45/Network+Request+Dashboard
2018-12-10T07:11:39
CC-MAIN-2018-51
1544376823318.33
[]
docs.appdynamics.com
API Gateway 7.6.2 Policy Developer Guide Configure LDAP directories Overview A filter that uses an LDAP directory to authenticate a user or retrieve attributes for a user must have an LDAP directory associated with it. You can use the Configure LDAP Server dialog to configure connection details of the LDAP directory. Both LDAP and LDAPS (LDAP over SSL) are supported. When a filter that uses an LDAP directory is run for the first time after a server refresh/restart, the server establishes a connection to the configured LDAP directory. General configuration Configure the following general LDAP connection settings: Name: Enter or select a name for the LDAP filter in the drop-down list. URL: Enter the URL location of the LDAP directory. The URL is a combination of the protocol (LDAP or LDAPS), the IP address of the host machine, and the port number for the LDAP service. By default, port 389 is reserved for LDAP connections, while port 636 is reserved for LDAPS connections. For example, the following are valid LDAP directory URLs: ldap://192.168.0.45:389 ldaps://145.123.0.28:636 Cache Timeout: Specifies the timeout for cached LDAP connections. Any cached connection that is not used in this time period is discarded. Defaults to 300000 milliseconds (5 minutes). A cache timeout of 0 means that the LDAP connection is cached indefinitely and never times out. Cache Size: Specifies the number of cached LDAP connections. Defaults to 8 connections. A cache size of 0 means that no caching is performed. Authentication configuration If the configured LDAP directory requires clients to authenticate to it, you must select the appropriate authentication method in the Authentication Type field. When the API Gateway connects to the LDAP directory, it is authenticated using the selected method. Choose one of the following authentication methods: Note: If any of the following authentication methods connect to the LDAP server over SSL, that server's SSL certificate must be imported into the API Gateway certificate store. None: No authentication credentials need to be submitted to the LDAP server for this method. In other words, the client connects anonymously to the server. Typically, a client is only allowed to perform read operations when connected anonymously to the LDAP server. It is not necessary to enter any details for this authentication method. Simple: Simple authentication involves sending a user name and corresponding password in clear text to the LDAP server. Because the password is passed in clear text to the LDAP server, it is recommended to connect to the server over an encrypted channel (for example, over SSL). It is not necessary to specify a Realm for the Simple authentication method. The realm is only used when a hash of the password is supplied (for Digest-MD5). However, in cases where the LDAP server contains multiple realms, and the specified user name is present in more than one of these realms, it is at the discretion of the specific LDAP server as to which user name binds to it. Select the SSL Enabled checkbox to force the API Gateway to connect to the LDAP directory over SSL. To successfully establish SSL connections with the LDAP directory, you must import the directory's certificate into the API Gateway's certificate store. You can do this using the global Certificates window (see Manage X.509 certificates and keys). For LDAPS (LDAP over SSL) connections, the LDAP server's certificate must be imported into the Policy Studio JRE trusted store. For more details, see Test the LDAP connection.
Digest-MD5 With Digest-MD5 authentication, the server generates some data and sends it to the client. The client encrypts this data with its password according to the MD5 algorithm. The LDAP server then uses the client's stored password to decrypt the data and hence authenticate the user. The Realm field is optional, but may be necessary in cases where the LDAP server contains multiple realms. If a realm is specified, the LDAP server attempts to authenticate the user for the specified realm only. External External authentication enables you to use client certificate-based authentication when connecting to an LDAP directory. When this option is selected, you must select a client certificate from the API Gateway certificate store. The SSL Enabled checkbox is selected automatically. Click the Signing Key button to select the client certificate to use to mutually authenticate to the LDAP server. Note This means that you must specify the URL field using LDAPS (for example, ldaps://145.123.0.28:636). The user name, password, and realm fields are not required for external authentication. Test the LDAP connection When you have specified all the LDAP connection details, you can click the Test Connection button to verify that the connection to the LDAP directory is configured successfully. This enables you to detect any configuration errors at design time, rather than at runtime. For LDAPS (LDAP over SSL) connections, the LDAP server's certificate must be imported into the Policy Studio JRE trusted store. You can do this by performing the following steps in Policy Studio: Select the Environment Configuration > Certificates and Keys > Certificates node in the Policy Studio tree. In the Certificates panel on the right, click Create/Import, and click Import Certificate. Browse to the LDAP server's certificate file, and click Open. Click Use Subject on the right of the Alias Name field, and click OK. The LDAP server's certificate is now imported into the Certificate Store, and must be added to the Java keystore. In the Certificates panel, select the certificates that you wish the JRE to trust. Click Export to Keystore, and browse to the cacerts file in the following directory: INSTALL_DIR\policystudio\Win32\jre\lib\security\cacerts Select the cacerts file. Click Save. You are prompted for a password. Click OK. Restart Policy Studio. You can now click Test Connection to test the connection to the LDAP directory server over SSL. Additional JNDI properties You can also specify optional JNDI properties as simple name-value pairs. Click the Add button to specify properties in the dialog. Related Links
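For readers who want to sanity-check the connection settings above outside Policy Studio, here is a hedged sketch using the third-party ldap3 package (an assumption for illustration; it is not part of API Gateway). The host, port, CA file, and bind credentials are placeholders:

```python
# Hypothetical stand-alone check of an LDAPS connection like the one described above,
# using the third-party ldap3 package (not part of API Gateway or Policy Studio).
import ssl
from ldap3 import Connection, Server, Tls

# Assumed values: replace with your directory's host, CA bundle, and bind credentials.
tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file='ldap-server-ca.pem')
server = Server('ldaps://145.123.0.28:636', use_ssl=True, tls=tls)

# "Simple" authentication: user name and password sent over the encrypted channel.
conn = Connection(server, user='cn=reader,dc=example,dc=com',
                  password='secret', auto_bind=True)
print(conn.extend.standard.who_am_i())
conn.unbind()
```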
https://docs.axway.com/bundle/APIGateway_762_PolicyDevGuide_allOS_en_HTML5/page/Content/PolicyDevTopics/common_ldap_conf.htm
2018-12-10T06:11:35
CC-MAIN-2018-51
1544376823318.33
[]
docs.axway.com
Visual Basic 6.0 Downloads Essential Downloads Code Advisor for Visual Basic 6.0 This application plugs in to Visual Basic 6.0 to analyze your code and suggest possible improvements. Visual Basic 6.0 to Visual Basic .NET Upgrade Assessment Tool Analyze your Visual Basic 6.0 projects to determine what issues you will need to address to be able to upgrade. Interop Forms Toolkit 2.1 Used to enable phased migration, this free add-in for Visual Studio simplifies the process of displaying .NET forms and controls in a Visual Basic 6 application. Visual Basic Power Packs Contains a set of controls to use in .NET forms that are familiar to Visual Basic 6 developers. It includes a DataRepeater control, Line and Shape controls, a PrintForm component, and a Printer Compatibility Library. Additional Downloads Service Pack 6 for Visual Basic 6.0 provides the latest updates to Visual Basic 6.0. It is recommended for all users of Visual Basic 6.0. Service Pack 6 for Visual Basic 6.0: Run-Time Redistribution Pack vbrun60sp6.exe is a self-extracting executable file that installs versions of the Microsoft Visual Basic run-time files required by all applications created with Visual Basic 6.0. Microsoft Visual Basic 6.0 Common Controls Update for the Microsoft Visual Basic 6.0 Common Controls: mscomctl.ocx and comctl32.ocx. Visual Basic 6.0 Upgrade Samples Download additional controls, components, and samples for Visual Basic 5.0 and 6.0. Microsoft Visual Studio Installer 1.1.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-basic-6/downloads
2018-12-10T06:06:04
CC-MAIN-2018-51
1544376823318.33
[]
docs.microsoft.com
Installing Multiple Instances of Reporting Services New: 14 April 2006 You can install multiple instances of Reporting Services on the same computer. Installing multiple instances is useful if you want to host different content for specific sites. Each instance will have its own report server database, configuration files, and virtual directories. If you want to share a single report server database among multiple instances and you are running SQL Server 2005 Enterprise Edition, you can configure a scale-out deployment. For more information about this deployment scenario, see Configuring a Report Server Scale-Out Deployment. The following configurations are possible for multi-instance installations: - Multiple instances of the same version and edition. Each report server instance connects to its own local or remote report server database. - Multiple instances of different versions and editions. For example, you can run SQL Server 2005 Express, SQL Server 2005 Developer Edition, and SQL Server 2000 Enterprise Edition on the same computer. Multiple instance support is provided through SQL Server 2005. If you are running SQL Server 2000 and 2005 on the same computer, you can have only one SQL Server 2000 Reporting Service instance, and it must run as the default instance (that is, as MSSQLSERVER). Each additional instance must be an edition of SQL Server 2005 and it must be installed as a named instance. Each instance of SQL Server is isolated from other instances that run on the same computer. You can install different versions and editions as separate instances on the same computer (for example, running the SQL Server 2000 Enterprise Edition and SQL Server 2005 Developer Edition as separate instances). When you run multiple instances of Reporting Services on the same computer, you can configure all the instances to use the default Web site in Internet Information Services (IIS). In this case, each report server instance is uniquely identified by its virtual directory. Alternatively, you can configure each instance to use a specific IP Address if you want to use non-default Web sites or route all report server requests through a specific IP address. Installing a Report Server Instance To install multiple instances of Reporting Services, you must run Setup once for each report server instance you want to install. You cannot install multiple instances at the same time. Installing a Report Server Default Instance The first time that you run Setup, you have the option of installing the first report server instance as a default installation if the computer meets the requirements. A default installation is one that uses all default values, resulting in a report server that is ready to use when Setup is complete. You can have only one default instance of Reporting Services on a single SQL Server instance. For more information about requirements for a default installation, see Default Configuration for a Report Server Installation. If you do not want to use the default values, or if your computer does not satisfy all the requirements for a default installation, you must install Reporting Services as a named instance. Installing a Report Server as a Named Instance If you are installing multiple report server instances on the same computer, each additional instance must be installed as a named instance. To install a named report server instance: - Specify the instance name in the Instance Name page of the Installation Wizard. 
- Select the Install but do not configure the server option on the Report Server Installation Options page. On the file system, the program files for each report server instance are kept in separate instance folders (for example, MSSQL.2, MSSQL.3, and so on). Setup creates the folders in the order in which the instance is installed. For more information about how to install multiple instances, see How to: Install SQL Server 2005 (Setup). Configuring a Report Server Instance To configure different report server instances on the same computer, you can use the Reporting Services Configuration tool to select the instance you want to configure. Reporting Services uses SQL Server instance naming conventions to identify each instance: - A default instance uses the default instance name of MSSQLServer. - A default instance of SQL Server Express is SQLExpress (note that SQL Server Express always installs as a named instance). - Other named instances are identified by the name that you provide at Setup. For more information about how to connect to a specific report server instance, see How to: Start Reporting Services Configuration. Accessing a Report Server Instance in a Multi-instance Deployment To access a particular instance of a report server or Report Manager, or to publish reports to a particular instance, you type the URL for the instance you want to use. The virtual directory for each instance must be unique. If you are creating a virtual directory for a named instance or default SQL Server Express instance, the Reporting Services Configuration tool automatically incorporates instance information to create a unique virtual directory name. For example, suppose you have a server named SERVER01 on which you installed a default report server instance, a named report server instance (identified as TestServer), and a SQL Server Express instance; each of these instances would get its own default URL based on its unique virtual directory name. Configuring a Report Server Instance to Use a Custom Web Site or Unique IP Address By default, report server instances are configured to the default Web site, where the IP address is mapped to (All Unassigned). You can use the default Web site with its existing configuration to host all the report server instances you install on the same computer. Alternatively, you can run each report server instance under a separate Web site. Using separate Web sites is required if you want to map a specific IP address for each report server instance. For more information about how to configure each report server instance to use a specific Web site or IP address, see How to: Configure Reporting Services to Use a Non-Default Web Site (Reporting Services Configuration). See Also: Installing SQL Server Reporting Services; How to: Start Reporting Services Configuration; Configuring Report Server Virtual Directories.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms403426(v=sql.90)?redirectedfrom=MSDN
2022-08-08T07:35:52
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
We use this cheatsheet to help us scope projects in a consistent way. What is this - What's the elevator pitch for the feature? What's the user value? - What are the most exciting tasks/stories we can tell for this feature? - Who is the primary audience? - Any docs deliverables you already have in mind? Dates - What are you working on right now? - When does this "release" (private beta, public beta, GA, etc.)? - If private beta, how many customers and do they need docs? Scope - Who will write first drafts? - Do you need any templates? - Does this need a liaison? Resources - Is there a test account/is this in staging? - Are there mockups or other resources? - Do you have any other collateral to share? People - Who is the primary reviewer (and backups)? - Who is product manager? - Who is lead dev? - Who is the designer? - Who is program manager? - Who is the researcher? Are we doing any user research? - Who is the PMM? - Who is the support point person? Before meeting ends - Who is writing? - When is it due? - Do we need tickets? - Who is following up with who? Tip We welcome thoughts or questions on our handbook! The easiest way to get in touch is to file a GitHub issue.
https://docs.newrelic.com/docs/agile-handbook/appendices/project-scoping-cheatsheet
2022-08-08T06:51:11
CC-MAIN-2022-33
1659882570767.11
[]
docs.newrelic.com
Tech doc writers (TW) are responsible for the docs. We write and edit the docs and exchange peer edits, maintain our backlog, and hero. To make it all happen, we use our Docs Team GitHub board to manage filed issues and pull requests (PRs). The who's who of an issue or PR GitHub tracks every stage of an issue or pull request's life cycle, including the people moving things along. Because the Docs team invites anyone to file an issue or create a pull request through GitHub, we strongly recommend verifying a filer's relationship to the New Relic GitHub organization. If you're uncertain whether a contributor is a Relic, a good trick is to check if they're a member of the New Relic GitHub org. Generally, we recommend assuming a filer is external until proven otherwise. Who's who of an issue or PR: - Creator: This is the person who opens the issue or PR. The creator could be someone on the Docs team, another Relic, or an external user. You'll label the issue or PR differently depending on who created it. - Assignee: This is the person who takes responsibility for an issue or PR and sees it through to publication. The Hero will assign non-TW issues and PRs to themselves, or they can take over an issue or PR from another writer. - Reviewer: This is the person who provides a peer edit of your code or document and then approves the changes. This is not necessarily the person responsible for that area or responsible for merging the commit. You can assign up to 100 reviewers to a given issue. Track issues in the board The docs board has the following columns: As a Hero, make sure you attend to the following throughout your day: - Check in with the previous Hero at the start of your day (especially on Monday at the start of the week). Don't forget to sync with the BCN Hero if necessary. - Watch for incoming PRs in #docs_deploys, and check everything in the Needs triage column. Drag cards from that column to the appropriate column. Everyone on the team helps keep things moving: - All writers should keep an eye on the Needs review column. If a PR doesn't have a reviewer, you can pick it up. - When you are ready for any type of review (simple or complex), move your PR into Needs review. - Be sure to move PRs that are blocked by SMEs to Waiting on SME/Blocked. - Check Waiting on TW to merge to see if your PR feedback is finished. - After you incorporate peer feedback, merge the PR, and remove it from the board. Deal with references in GitHub (and the style guide) - Don't link to anything non-public from a public place. - You can reference Jira tickets, but reference tickets by issue key (DOC-1234 is ok) rather than a link (which is not). - Don't mention traffic or usage numbers publicly. - Don't reference internal people by name. If they have a GH account, @mention their GH handle. If they don't, talk instead about teams ("talk to a browser team engineer" or "Support Engineer") rather than people. - You can mention the #help-documentation channel and hero. Merge releases into main work (or, when do we publish?) The Hero currently merges at 9 AM (morning), 12 PM (noon), and 3 PM (evening) Pacific. We merge release branches into main to avoid interruptions when someone merges into develop during a release. To learn more about this workflow, see the gitflow documentation in Atlassian. It's important to create a new release branch off of the develop branch. Before you create a new branch, make sure you're on develop.
To start a release: - Create a branch of develop using GitHub Desktop by clicking Current Branch in the top header, New Branch in the dropdown, then selecting Develop. - Name the branch with this pattern: daily-release/mm-dd-yy-morning/noon/evening. For example, a daily release happening at 9 AM on October 27, 2021 would follow this style: daily-release/10-27-21-morning. - To push your changes, in GitHub Desktop click Push Origin. - Create a pull request into main from your new daily release branch by clicking Create Pull Request. This will open a pull request screen on github.com. Pull requests default to merging into develop, so select main as the base branch in the left side of the page and then click Submit Pull Request. - Wait until all the checks complete, and then merge the pull request. All branches that follow the daily-release/mm-dd-yy-morning pattern are protected branches. Only admins can delete them. GitHub labels We apply labels to issues so we can better triage and track the health of our backlog: content: All issues use this label. content indicates the issue is content-related rather than a design or engineering issue. pg_*: Docs team will always use this label to indicate product group. For full definitions, see the "Doc Jira and GitHub fields" doc in the internal team Google Drive. - Indicate who created the issue: from_internal: A Relic created it. from_external: A non-Relic opened the issue in the repo OR the issue came in through the #customer-feedback process. from_tw: One of us created it (unless we were passing along #customer-feedback). - Optionally: docs-issue-candidate: Issues that are too large in scope for the docs team to handle without product team expertise. This label alerts the docs issues team to migrate these issues into the customer feedback channel where they will be triaged and sent to product teams. Jira'd: Issues that have a corresponding Jira ticket. Make sure you leave the Jira number in the comments of the issue (for example, DOC-1234). Every pull request needs these labels so we can see where our contributions come from: content: Always add, this indicates the PR is content-related rather than design or engineering. - Indicate who created the pull request: from_internal: A Relic created it. from_external: A user opened it in the repo OR it came in through the #customer-feedback process. from_tw: One of us created it (unless we were passing along #customer-feedback). If the PR fixes an external issue, label it as from_tw since the work was done by a tech writer. Docs issues There's no hard and fast rule in choosing good candidates for docs issues. They could be anything too difficult for a docs hero to chase down, or anything that could benefit from deep engineering expertise. Some examples are code examples, requirements sections, and NRQL queries. Ultimately, docs issues help the docs team collaborate with product teams to solve documentation and product issues. It lets us leverage product team expertise to solve as many issues as we can, regardless of who filed them. The life cycle of a viable issue candidate looks something like this: - Someone files an issue through GitHub for the GitHub hero to evaluate it as a potential candidate. - The hero adds the docs-issue-candidate label. Afterward, the hero archives the issue. - Every week, a designated docs team member assesses all issues with the docs-issue-candidate label. - If approved, the docs-issue-candidate label is changed to a docs-issue label.
We then migrate the issue into the customer feedback Jira project. - The new jiras are triaged by PMs, then funneled to the correct project team. - The PM will give the ticket to an assignee, who will close it once complete.
https://docs.newrelic.com/docs/style-guide/writing-docs/writer-workflow/github-intro/
2022-08-08T07:19:07
CC-MAIN-2022-33
1659882570767.11
[]
docs.newrelic.com
A global variable is one that may be accessed anywhere within Octave. This is in contrast to a local variable which can only be accessed outside of its current context if it is passed explicitly, such as by including it as a parameter when calling a function (fcn (local_var1, local_var2)). A variable is declared global by using a global declaration statement. The following statements are all global declarations.
global a
global a b
global c = 2
global d = 3 e f = 5
Note that the global qualifier extends only to the next end-of-statement indicator which could be a comma (','), semicolon (';'), or newline ('\n'). For example, the following code declares one global variable, a, and one local variable b to which the value 1 is assigned.
global a, b = 1
Inside a function body, a variable must also be declared global in order to access the one universal variable. For example,
global x
function f ()
  x = 1;
endfunction
f ()
does not set the value of the global variable x to 1. Instead, a local variable, with name x, is created and assigned the value. Programming Note: While global variables occasionally are the right solution to a coding problem, modern best practice discourages their use. Code which relies on global variables may behave unpredictably between different users and can be difficult to debug. This is because global variables can introduce systemic changes so that localizing a bug to a particular function, or to a particular loop within a function, becomes difficult. isglobal (name): Return true if name is a globally visible variable. For example:
global x
isglobal ("x")
⇒ 1
See also: isvarname, exist.
https://docs.octave.org/interpreter/Global-Variables.html
2022-08-08T08:12:19
CC-MAIN-2022-33
1659882570767.11
[]
docs.octave.org
12.7. Source code tree layout There are a few notable top-level directories in the source tree. The main sub-projects:
oshmem: Top-level OpenSHMEM code base
ompi: The Open MPI code base
opal: The OPAL code base
config: M4 scripts supporting the top-level configure script
etc: Some miscellaneous text files
docs: Source code for Open MPI documentation
examples: Trivial MPI / OpenSHMEM example programs
3rd-party: Included copies of required core libraries (either via Git submodules in Git clones or via binary tarballs).
Note: While it may be considered unusual, we include binary tarballs (instead of Git submodules) for 3rd party projects that are: needed by Open MPI for correct operation, and not universally included in OS distributions, and rarely updated. Each of the three main source directories (oshmem, ompi, and opal) generates at least a top-level library named liboshmem, libmpi, and libopen-pal, respectively. They can be built as either static or shared libraries. Executables are also produced in subdirectories of some of the trees. Each of the sub-project source directories has a similar (but not identical) directory structure under it:
class: C++-like "classes" (using the OPAL class system) specific to this project
include: Top-level include files specific to this project
mca: MCA frameworks and components specific to this project
runtime: Startup and shutdown of this project at runtime
tools: Executables specific to this project (currently none in OPAL)
util: Random utility code
There are other top-level directories in each of the sub-projects, each having to do with specific logic and code for that project. For example, the MPI API implementations can be found under ompi/mpi/LANGUAGE, where LANGUAGE is c or fortran. The layout of the mca trees is strictly defined. They are of the form: PROJECT/mca/FRAMEWORK/COMPONENT To be explicit: it is forbidden to have a directory under the mca trees that does not meet this template (with the exception of base directories, explained below). Hence, only framework and component code can be in the mca trees. That is, framework and component names must be valid directory names (and C variables; more on that later). For example, the TCP BTL component is located in opal/mca/btl/tcp/. The name base is reserved; there cannot be a framework or component named base. Directories named base are reserved for the implementation of the MCA and frameworks. Here are a few examples (as of the v5.1.x source tree):
# Main implementation of the MCA
opal/mca/base
# Implementation of the btl framework
opal/mca/btl/base
# The sysv component of the sshmem framework
oshmem/mca/sshmem/sysv
# Implementation of the pml framework
ompi/mca/pml/base
Under these mandated directories, frameworks and/or components may have arbitrary directory structures, however.
https://docs.open-mpi.org/en/main/developers/source-code-tree-layout.html
2022-08-08T07:30:53
CC-MAIN-2022-33
1659882570767.11
[]
docs.open-mpi.org
You can activate the tool by clicking on the tool icon in the ToolBox. Normally, tool options are displayed in a window attached under the Toolbox as soon as you activate a tool. If they are not, you can access them from the image menu bar through Windows → Dockable Dialogs → Tool Options, which opens the option window of the selected tool. Common select options: all these options work exactly the same way as they were already described for the rectangular selection. See Section 2.2.4, "Tool Options" for details.
https://docs.gimp.org/2.10/da/gimp-tool-ellipse-select.html
2022-08-08T08:06:42
CC-MAIN-2022-33
1659882570767.11
[]
docs.gimp.org
E - the type of the message
public interface ITopic<E> extends DistributedObject
When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including new members that joined after you added the listener. Messages are ordered, meaning that listeners (subscribers) will process the messages in the order they are actually published. If cluster member M publishes messages m1, m2, m3...mn to a topic T, then Hazelcast makes sure that all of the subscribers of topic T will receive and process m1, m2, m3...mn in order. Since Hazelcast 3.5 it is possible to have reliable topics. Normally all topics rely on the shared eventing system and shared threads. With Hazelcast 3.5 it is possible to configure a topic to be reliable and to get its own Ringbuffer to store events and its own executor to process events. The events in the ringbuffer are replicated, so they won't get lost when a node goes down.
Inherited methods: destroy, getDestroyContextForTenant, getPartitionKey, getServiceName
String getName(): specified by getName in interface DistributedObject.
void publish(@Nonnull E message): message is the message to publish to all subscribers of this topic. Throws TopicOverloadException if the consumer is too slow (only works in combination with reliable topic).
CompletionStage<Void> publishAsync(@Nonnull E message): message is the message to publish asynchronously to all subscribers of this topic.
@Nonnull UUID addMessageListener(@Nonnull MessageListener<E> listener): the MessageListener.onMessage(Message) method of the given MessageListener is called. More than one message listener can be added on one instance. See ReliableMessageListener to better integrate with a reliable topic. listener is the MessageListener to add. Throws NullPointerException if listener is null.
boolean removeMessageListener(@Nonnull UUID registrationId): if the given listener was already removed, this method does nothing. registrationId is the ID of the listener registration. Returns true if the registration is removed, false otherwise.
@Nonnull LocalTopicStats getLocalTopicStats()
void publishAll(@Nonnull Collection<? extends E> messages) throws ExecutionException, InterruptedException: messages are the messages to publish to all subscribers of this topic. Throws TopicOverloadException if the consumer is too slow (only works in combination with reliable topic), ExecutionException, InterruptedException.
CompletionStage<Void> publishAllAsync(@Nonnull Collection<? extends E> messages): messages are the messages to publish asynchronously to all subscribers of this topic.
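This page documents the Java interface; purely to illustrate the publish/subscribe flow it describes, here is a hedged sketch using the separate hazelcast-python-client package (an assumption, not part of this Javadoc, and its API may differ between client versions):

```python
# Hedged sketch of the ITopic publish/subscribe flow using the separate
# hazelcast-python-client package (the page itself documents the Java API).
import hazelcast

client = hazelcast.HazelcastClient()  # connects to a cluster on localhost by default
topic = client.get_topic("news").blocking()

# Subscribers receive messages in publication order, per the contract above.
topic.add_listener(on_message=lambda msg: print("got:", msg.message))

topic.publish("m1")
topic.publish("m2")

client.shutdown()
```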
https://docs.hazelcast.org/docs/5.0.3/javadoc/com/hazelcast/topic/ITopic.html
2022-08-08T06:30:13
CC-MAIN-2022-33
1659882570767.11
[]
docs.hazelcast.org
9.3. Launching only on the local node It is common to develop MPI applications on a single workstation or laptop, and then move to a larger parallel / HPC environment once the MPI application is ready. Open MPI supports running multi-process MPI jobs on a single machine. In such cases, you can simply avoid listing a hostfile or remote hosts, and simply list a number of MPI processes to launch, for example by passing only the -np option and your executable to mpirun. 9.3.1. MPI communication When running on a single machine, Open MPI will most likely use the ob1 PML and the following BTLs for MPI communication between peers: self: used for sending and receiving loopback MPI messages — where the source and destination MPI process are the same. sm: used for sending and receiving MPI messages where the source and destination MPI processes can share memory (e.g., via SYSV or POSIX shared memory mechanisms).
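As an illustration of such a single-machine run, here is a hedged sketch of a tiny MPI program using mpi4py (mpi4py is an assumption; the page itself is language-agnostic). Launching it with mpirun and a process count on one machine exercises exactly the self and sm paths described above:

```python
# hello_mpi.py -- hedged mpi4py sketch; launch with something like:
#   mpirun -np 4 python hello_mpi.py
# (mpi4py is assumed here; the Open MPI page itself does not mandate it.)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# With -np 1 the message loops back over the "self" BTL; with several
# processes on one machine it typically travels over shared memory ("sm").
msg = comm.sendrecv(f"hello from rank {rank}",
                    dest=(rank + 1) % size,
                    source=(rank - 1) % size)
print(f"rank {rank}/{size} received: {msg}")
```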
https://docs.open-mpi.org/en/main/running-apps/localhost.html
2022-08-08T07:34:45
CC-MAIN-2022-33
1659882570767.11
[]
docs.open-mpi.org
Application Modules ABP is a modular application framework which consists of dozens of NuGet & NPM packages. It also provides a complete infrastructure to build your own application modules which may have entities, services, database integration, APIs, UI components and so on. There are two types of modules. They don't have any structural difference, but are grouped by how they are developed and licensed: open source or commercial. Open Source Application Modules There are some free and open source application modules developed and maintained as a part of the ABP Framework. - Account: Provides UI for account management and allows users to log in/register to the application. - Audit Logging: Persists audit logs to a database. - Background Jobs: Persists background jobs when using the default background job manager. - CMS Kit: A set of reusable Content Management System features. - Docs: Used to create a technical documentation website. ABP's own documentation already uses this module. - Feature Management: Used to persist and manage the features. - Identity: Manages organization units, roles, users and their permissions, based on the Microsoft Identity library. - IdentityServer: Integrates to IdentityServer4. - Permission Management: Used to persist permissions. - Setting Management: Used to persist and manage the settings. - Tenant Management: Manages tenants for a multi-tenant application. - Virtual File Explorer: Provides a simple UI to view files in the virtual file system. See the GitHub repository for the source code of all modules. Commercial Application Modules The ABP Commercial license provides additional pre-built application modules on top of the ABP framework. See the module list provided by ABP Commercial.
https://docs.abp.io/en/abp/5.0/Modules/Index
2022-08-08T07:09:52
CC-MAIN-2022-33
1659882570767.11
[]
docs.abp.io
Installing the Unity SDK To ensure that you are using the latest version of Unity Ads, we recommend that you install the SDK package from the Unity Editor Package Manager. Note: Disabling domain reloading in Play Mode Options is not supported for the Unity Ads package. In the Unity Editor, select Window > Package Manager. In the Package Manager window, select the Advertisement package from the list, and then select the most recent verified version. Select Install or Update. If you created your project through the Unity Editor flow, you must also link it to a Unity Dashboard Project ID before it can access Unity Gaming Services. See Configuring projects for Unity Gaming Services. Next steps: To continue your integration, see Initializing the SDK in Unity.
https://docs.unity.com/ads/InstallingTheUnitySDK.html
2022-08-08T08:03:00
CC-MAIN-2022-33
1659882570767.11
[]
docs.unity.com
Radiometric Temperature Measurements in Nongray Ferropericlase With Pressure-, Spin-, and Temperature-Dependent Optical Properties. Lobanov, Sergey S.; Speziale, Sergio. Citation: Lobanov, Sergey S.; Speziale, Sergio, 2019: Radiometric Temperature Measurements in Nongray Ferropericlase With Pressure-, Spin-, and Temperature-Dependent Optical Properties. In: Journal of Geophysical Research: Solid Earth, Vol. 124, 12: 12825-12836, DOI: 10.1029/2019JB018668. Accurate temperature determination is central to measurements of physical and chemical properties in laser-heated (LH) diamond anvil cells (DACs). Because the optical properties of samples at high pressure-temperature (P-T) conditions are generally unknown, virtually all LH DAC studies employ the graybody assumption (i.e., wavelength-independent emissivity and absorptivity). Here we test the adequacy of this assumption for ferropericlase (13 mol.% Fe), the second most abundant mineral in the Earth's lower mantle. We model the wavelength-dependent emission and absorption of thermal radiation in samples of variable geometry and with absorption coefficients experimentally constrained at lower mantle P and P-T. The graybody assumption in LH DAC experiments on nongray ferropericlase contributes moderate systematic errors within ±200 K at 40, 75, and 135 GPa and T < 2300 K for all plausible sample geometries. However, at core-mantle boundary P-T conditions (135 GPa, 4000 K) the graybody assumption may underestimate the peak temperature in the DAC by up to 600 K in self-insulated samples due to selective light attenuation in highly opaque ferropericlase. Our results allow insights into the apparent discrepancy between available ferropericlase melting studies and offer practical guidance for accurate measurements of its solidus in LH DACs. More generally, the results of this work demonstrate that reliable temperature measurements in LH DACs require that the optical and geometrical properties of the samples are established. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
https://e-docs.geo-leo.de/handle/11858/9110
2022-08-08T07:20:50
CC-MAIN-2022-33
1659882570767.11
[]
e-docs.geo-leo.de
To execute a query, you create a Statement instance and pass it to Session#execute() or Session#executeAsync. The driver provides various implementations:
SimpleStatement: a simple implementation built directly from a character string. Typically used for queries that are executed only once or a few times.
BoundStatement: obtained by binding values to a prepared statement. Typically used for queries that are executed often, with different values.
BuiltStatement: a statement built with the QueryBuilder DSL. It can be executed directly like a simple statement, or prepared.
BatchStatement: a statement that groups multiple statements to be executed as a batch.
Before executing a statement, you might want to customize certain aspects of its execution. Statement provides a number of methods for this, for example:
Statement s = new SimpleStatement("select release_version from system.local");
s.enableTracing();
session.execute(s);
If you use custom policies (RetryPolicy, LoadBalancingPolicy, SpeculativeExecutionPolicy…), you might also want to have custom properties that influence statement execution. To achieve this, you can wrap your statements in a custom StatementWrapper implementation.
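For comparison only, the Python driver for Cassandra/Scylla (an assumption outside this Java-focused page) exposes an analogous SimpleStatement; a minimal sketch:

```python
# Hedged sketch using the Python driver's analogous API (cassandra-driver /
# scylla-driver); this page itself documents the Java driver.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])  # contact point is an assumption
session = cluster.connect()

# Simple statement, typically for one-off queries; trace its execution.
stmt = SimpleStatement("SELECT release_version FROM system.local")
row = session.execute(stmt, trace=True).one()
print(row.release_version)

cluster.shutdown()
```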
https://java-driver.docs.scylladb.com/scylla-3.11.0.x/manual/statements/
2022-08-08T08:04:32
CC-MAIN-2022-33
1659882570767.11
[]
java-driver.docs.scylladb.com
Welcome to Opytimizer's documentation! Did you ever reach a bottleneck in your computational experiments? Are you tired of selecting suitable parameters for a chosen technique? If yes, Opytimizer is the real deal! This package provides an easy-to-go implementation of meta-heuristic optimizations. From agents to a search space, from internal functions to external communication, we will foster all research related to optimizing stuff. Use Opytimizer if you need a library or wish to: - Create your own optimization algorithm; - Design or use pre-loaded optimization tasks; - Mix-and-match different strategies to solve your problem; - Because it is fun to optimize things. Opytimizer is compatible with: Python 3.6+. Package Reference - opytimizer - opytimizer.core - opytimizer.functions - opytimizer.math - opytimizer.optimizers - opytimizer.spaces - opytimizer.utils - opytimizer.visualization
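As a flavor of what agents, a search space, a function, and an optimizer look like in practice, here is a hedged sketch of a basic run; the module paths follow the package reference above, but exact signatures may vary between releases:

```python
# Hedged Opytimizer sketch: minimize a simple sphere function with PSO.
# Module paths follow the package reference above; signatures may differ by version.
import numpy as np

from opytimizer import Opytimizer
from opytimizer.core import Function
from opytimizer.optimizers.swarm import PSO
from opytimizer.spaces import SearchSpace

def sphere(x):
    # x is an agent's position vector
    return np.sum(x ** 2)

space = SearchSpace(n_agents=20, n_variables=2,
                    lower_bound=[-10, -10], upper_bound=[10, 10])
optimizer = PSO()
function = Function(sphere)

Opytimizer(space, optimizer, function).start(n_iterations=100)
```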
https://opytimizer.readthedocs.io/en/latest/
2022-08-08T08:28:21
CC-MAIN-2022-33
1659882570767.11
[]
opytimizer.readthedocs.io
Table of Contents Product Index Grit…..that describes Eli HD for Holt 8 in one word. A Weird West hero without any of the traditional heroic characteristics, he's not chivalrous, he's not forgiving, he's not even a diamond in the rough… However, while he's hunting zombies and aliens, chasing outlaws, or running from the law himself he is still a leading man with a rugged side. As a Wild West Hero or a Post-Apocalyptic weather-beaten villain or a modern-day Urban Cowboy, his stallion a machine instead of a horse, he is ready for a starring role in your Weird West works of art! Eli HD for Holt 8 is an entirely new character featuring an ultra-detailed skin by Morris and a custom-sculpted HD Head and Body by Emrys built off the Holt 8 Shape. Enhancing the High Definition elements of the custom morphs are Normal Maps rendered from a cage obj with all the beautiful details of the Eli sculpted morphs transferred over and then combined with the skin textures these Normal Maps (along with hyper-realistic textures) bring Eli HD for Holt 8 to life! Eli's textures were built from custom photos, and an incredible amount of time went into ensuring all the details could hold up even under the closest of renders! Be sure to turn up the Subdivision Level because his custom sculpted morph comes to life with beautiful realism. Eli includes 5 realistic eye textures and a damaged right eye texture along with a LIE MAT pose preset to apply to all the other eye textures which are part of the facially scarred character preset. His scarred face textures are a reminder life isn't forgiving and some scars last a lifetime. He also includes 2 sets of tattoos-one torso tattoo taken directly from photos of skin and another set custom-built from photoshop brushes along with his beloved Amity’s name forever inked into his skin Eli also includes detailed fiber mesh brows that add depth to the face textures, plus a specially raised cornea morph that allows the light to reflect more accurately in renders. Pick up Eli HD for Holt 8 to see how his quality will make him a favorite for all your realistic renders, as he is in ours!.
http://docs.daz3d.com/doku.php/public/read_me/index/72195/start
2022-08-08T08:20:28
CC-MAIN-2022-33
1659882570767.11
[]
docs.daz3d.com
Using the PUT method Use the Put method action from the REST Web Service package to send requests to and receive responses from a REST API. This example uses endpoints from the Swagger Petstore sample API to demonstrate using the Put method action to update data in the Petstore database. Procedure - Use the Put method action to update the pet name to "Pluto" and pet status to "sold". - Double-click or drag the action. - Enter the following URI: - Open the log file saved in Using GET method and copy the Pet ID of the first entry. - Copy and paste the following into the Custom parameters field, replacing the text in the angle brackets with the value that you copied from the file: { "petId": 0, "name": "Pluto", "status": "sold" } - Insert the variable Output in the Assign the output to a variable field. - Move the Message box action below the Put method action: - Double-click or drag the Message box action. - In the Enter text to log field, enter $Output{Body}$. - Click Save and then click Run. The bot retrieves the response body and prints it to the Message box. A successful response includes "name":"Pluto","status":"sold".
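Outside the bot, the same update can be reproduced with plain Python for testing; below is a hedged sketch using the requests library against the public Swagger Petstore endpoint (the exact endpoint shape is an assumption based on the standard Petstore sample, since the page omits the URI):

```python
# Hedged sketch: reproduce the bot's PUT update with the requests library.
# The endpoint and payload shape follow the public Swagger Petstore sample API,
# which is an assumption here (the page omits the exact URI).
import requests

pet_id = 0  # replace with the pet ID captured from the earlier GET step
payload = {"id": pet_id, "name": "Pluto", "status": "sold"}

resp = requests.put("https://petstore.swagger.io/v2/pet", json=payload, timeout=10)
resp.raise_for_status()

body = resp.json()
# A successful response echoes back "name": "Pluto" and "status": "sold".
print(body["name"], body["status"])
```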
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v2019/page/enterprise-cloud/topics/aae-client/bot-creator/using-the-workbench/cloud-using-put-action.html
2022-08-08T06:59:36
CC-MAIN-2022-33
1659882570767.11
[]
docs.automationanywhere.com
Troubleshooting common problems Solving problems hangs on OS X when using docker-machine or VirtualBox to run Docker: There is a known problem with VirtualBox shared folders: it is impossible for the VM to create a unix socket inside them (for strange reasons). To solve this, you can instead mount your /Users directory using docker-machine-nfs:
brew install docker-machine-nfs
docker-machine-nfs default -f --nfs-config="-maproot=0"
It is impossible to modify the course.yaml from the webdav interface: Some editors/webdav clients attempt to first move/delete a file before modifying it. It is forbidden to remove or rename the course.yaml, so the modification will fail. Use simpler editors (such as nano/vim) that directly edit the file rather than doing strange things.
https://docs.inginious.org/en/latest/admin_doc/install_doc/troubleshooting.html
2022-08-08T07:19:08
CC-MAIN-2022-33
1659882570767.11
[]
docs.inginious.org
4. Tutorial for developers of WC models and WC modeling tools Developers should follow these steps to build and use WC modeling computing environments (Docker images and containers) to test, debug, and run WC models and WC modeling tools. Use wc_env_manager to pull existing WC modeling Docker images. Use wc_env_manager to create Docker containers with volumes mounted from the host and installations of software packages contained on the host. Run models and tools inside the Docker containers created by wc_env_manager. 4.2. Building containers for WC modeling Second, set the configuration for the containers created by wc_env_manager by creating a configuration file ./wc_env_manager.cfg following the schema outlined in /path/to/wc_env_manager/wc_env_manager/config/core.schema.cfg and the defaults in /path/to/wc_env_manager/wc_env_manager/config/core.default.cfg.
- Configure additional Docker containers that should be run and linked to the main container. For example, the configuration below will generate a second container based on the postgres:10.5-alpine image with the host name postgres_hostname on the wc_network Docker network and the environment variable POSTGRES_USER set to postgres_user. The main Docker image will also be added to the same wc_network Docker network, which will make the second image accessible to the main image with the host name postgres_hostname. In this example, it will then be possible to log in to the Postgres service from the main container with the command psql -h postgres_hostname -U postgres_user <DB>.
[wc_env_manager]
[[network]]
name = wc_network
[[[containers]]]
[[[[postgres_hostname]]]]
image = postgres:10.5-alpine
[[[[[environment]]]]]
POSTGRES_USER = postgres_user
- Configure environment variables that should be set in the Docker container. The following example illustrates how to set the PYTHONPATH environment variable to the paths to wc_lang and wc_sim. Note, we recommend using pip to manipulate the Python path rather than directly manipulating the PYTHONPATH environment variable. We only recommend manipulating the PYTHONPATH environment variable for packages that don't have setup.py scripts or for packages with setup.py scripts that you temporarily don't want to run.
[wc_env_manager]
[[container]]
[[[environment]]]
PYTHONPATH = '/root/host/Documents/wc_lang:/root/host/Documents/wc_utils'
- Configure the host paths that should be mounted into the containers. Typically, this should include mounting the parent directory of your Git repositories into the container. For example, this configuration will map (a) the Documents directory of your host (${HOME}/Documents) to the /root/host/Documents directory of the container and (b) the WC modeling configuration directory of your host (${HOME}/.wc) to the WC modeling configuration directory of the container (/root/.wc). ${HOME} will be substituted with the path to your home directory on your host.
[wc_env_manager]
[[container]]
[[[paths_to_mount]]]
[[[[${HOME}/Documents]]]]
bind = /root/host/Documents
mode = rw
[[[[${HOME}/.wc]]]]
bind = /root/.wc
mode = rw
- Configure the WC modeling packages that should be installed into wc_env. This should be specified in the pip requirements.txt format and should be specified in terms of paths within the container.
The following example illustrates how to create editable installations of clones of wc_lang and wc_utils mounted from the host into the container:
[wc_env_manager]
[[container]]
python_package = '''
-e /root/host/Documents/wc_lang
-e /root/host/Documents/wc_utils
'''
- Configure additional command(s) that should be run when the main Docker container is created. These commands will be run within a bash shell. For example, this configuration could be used to create and import the content of a database when the container is created:
[wc_env_manager]
[[container]]
setup_script = '''
create db
restore db
'''
- Configure the ports that should be exposed by the container. The following example illustrates how to expose port 8888 as 8888:
[wc_env_manager]
[[container]]
[[[ports]]]
8888 = 8888
- Configure all credentials required by the packages and tools used by the container. These should be installed in config (*.cfg) files that can be accessed by wc-env-manager. ~/.wc is a standard location for whole-cell config files. Failure to install credentials will likely generate Authentication error exceptions. Docker images and containers may need to be cleaned up if wc-env-manager fails. See the docker command help for instructions.
Third, use wc_env_manager to create the Docker containers as configured above. 4.3. Using containers to run WC models and WC modeling tools Fourth, use the following command to execute the container. This launches the container and runs an interactive bash shell inside the container:
docker exec --interactive --tty <container_id> bash
Fifth, 4.4. Using containers to develop WC models and WC modeling tools Sixth, use command line programs inside the container, such as python, coverage or pytest, to run WC models and tools. Note, only mounted host paths will be accessible in the container.
https://docs.karrlab.org/wc_env_manager/master/0.0.1/tutorial_developers.html
2022-08-08T07:45:05
CC-MAIN-2022-33
1659882570767.11
[]
docs.karrlab.org
miio.device module¶ - class miio.device.Device(ip: Optional[str] = None, token: Optional[str] = None, start_id: int = 0, debug: int = 0, lazy_discover: bool = True, timeout: Optional[int] = None, *, model: Optional[str] = None)[source]¶ Base class for all device implementations. This is the main class providing the basic protocol handling for devices using the miIO protocol. This class should not be initialized directly; a device-specific class inheriting from it should be used instead. - get_properties(properties, *, property_getter='get_prop', max_properties=None)[source]¶ Request properties in slices based on the given max_properties. This is necessary as some devices have a limitation on how many properties can be queried at once. If max_properties is None, all properties are requested at once. - send(command: str, parameters: Optional[Any] = None, retry_count: Optional[int] = None, *, extra_parameters=None) → Any [source]¶ Send a command to the device. Basic format of the request: {"id": 1234, "method": command, "parameters": parameters}. extra_parameters allows passing elements to the top level of the request. This is necessary for some devices, such as gateway devices, which expect the sub-device identifier to be at the top level. - class miio.device.DeviceStatus[source]¶ Base class for status containers. All status container classes should inherit from this class. The __repr__ implementation returns all defined properties and their values.
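As a usage illustration, the short sketch below drives the base Device class directly over the miIO protocol. The IP address, token, and the property names passed to get_prop are placeholders rather than values taken from this documentation, and a real integration would normally use a device-specific subclass as noted above.

```python
from miio import Device

# Placeholder connection details; substitute your device's IP address and
# 32-character hexadecimal token.
dev = Device(ip="192.168.1.10", token="ffffffffffffffffffffffffffffffff")

# Query basic device information (model, firmware, hardware).
print(dev.info())

# send() issues a raw miIO command; "get_prop" and the property names used
# here are device-specific examples only.
print(dev.send("get_prop", ["power", "temperature"]))
```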
https://python-miio.readthedocs.io/en/latest/api/miio.device.html
2022-08-08T07:42:04
CC-MAIN-2022-33
1659882570767.11
[]
python-miio.readthedocs.io
Quamotion WebDriver provides two ways to automate apps running on iOS and Android devices: App Automation and Device Automation. Because with App Automation the app (the .apk or .ipa file) is modified, you need to have access to the original .apk or .ipa file, and the app bundle needs to be resigned. Some applications may include anti-tamper protection which will detect app resigning and block the app from launching. For this reason, we always recommend you try Device Automation first, and only switch to App Automation when required. On iOS, App Automation and Device Automation boil down to these two options: XCUIElement* Apps which contain custom controls must ensure that these custom controls respect the Apple Accessibility Guidelines. Custom controls which fail to meet these guidelines will be recognized as XCUIElementOther, and very little information about these controls may be available to test automation scripts. To quote the Accessibility Programming Guide for iOS: We generally recommend you use Device Automation on iOS, as there are a couple of limitations related to app automation: app automation requires access to the .ipa file, so it does not work in several scenarios. If you do have access to the .ipa of the app, but the scenario includes any of these actions, app automation will not work properly. In other words, while app automation may have an edge in very specific corner cases (such as apps which use custom controls, where the developers did not respect the Apple guidelines), these advantages are usually offset by the many limitations that app automation comes with.
http://docs.quamotion.mobi/webdriver/frontend/app-automation-vs-device-automation/
2019-09-15T07:53:03
CC-MAIN-2019-39
1568514570830.42
[]
docs.quamotion.mobi
chainer.functions.roi_pooling_2d¶ chainer.functions.roi_pooling_2d(x, rois, outh, outw, spatial_scale)[source]¶ Spatial Region of Interest (ROI) pooling function. This function acts similarly to max_pooling_2d(), but it computes the maximum over the spatial patch of the input covered by each region of interest, for each channel. x – Input variable; a 4-dimensional array of shape (batch, channels, height, width). rois – Input ROI variable; the shape is expected to be (number of ROIs, 5), and each datum is set as below: (batch_index, x_min, y_min, x_max, y_max). outh (int) – Height of the output image after pooling. outw (int) – Width of the output image after pooling. spatial_scale (float) – Scale by which the ROI coordinates are resized to the input feature map. - Returns Output variable. - Return type Variable - See the original paper proposing ROIPooling: Fast R-CNN.
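A small usage sketch follows; the array shapes and ROI coordinates are toy values chosen only for illustration.

```python
import numpy as np
import chainer.functions as F

# Feature map: batch of 1, 3 channels, 12x8 spatial size.
x = np.random.randn(1, 3, 12, 8).astype(np.float32)

# Two ROIs, each given as (batch_index, x_min, y_min, x_max, y_max).
rois = np.array([[0, 1, 1, 6, 6],
                 [0, 0, 0, 7, 11]], dtype=np.float32)

# Pool every ROI to a fixed 7x7 output; spatial_scale maps the ROI
# coordinates onto the feature map (1.0 because both share one resolution).
y = F.roi_pooling_2d(x, rois, outh=7, outw=7, spatial_scale=1.0)
print(y.shape)  # (2, 3, 7, 7): one pooled patch per ROI and channel
```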
https://docs.chainer.org/en/stable/reference/generated/chainer.functions.roi_pooling_2d.html
2019-09-15T08:32:53
CC-MAIN-2019-39
1568514570830.42
[]
docs.chainer.org
API stability guarantees¶ Non-public API (experimental)¶ Internal modules, which are not to be used, will change without notice. Corda incubating modules¶ Corda internal modules¶ Everything else is internal and will change without notice, or may even be deleted, and should not be used. This also includes any package that has .internal in it. So, for example, net.corda.core.internal and its sub-packages should not be used. Some of the public modules may depend on internal modules, so be careful not to rely on these transitive dependencies. In particular, the testing modules depend on the node module, so you may end up having the node in your test classpath. Warning The web server module will be removed in the future. You should call Corda nodes through RPC from your web server of choice, e.g., Spring Boot, Vertx, Undertow. The @DoNotImplement annotation¶
https://docs.corda.net/head/corda-api.html
2019-09-15T08:03:36
CC-MAIN-2019-39
1568514570830.42
[]
docs.corda.net
Version 1.15 Banner Launcher General - Fixed an issue caused by unicode characters in INI files. - Made the Mod Launcher check to see if the game executable you selected is supported after browsing for one. Main Window Removed the "Mods" tickbox. Launcher Settings - Added the "Game Visual Style" settings on the "Advanced" page. - Added the "Game DPI Aware" setting on the "Advanced" page. - Added the "Ignore Game Compatibility Layers" setting on the "Advanced" page. Mod Features - Added Text type settings. - Added a Text property for MultipleChoice type settings. - Added a NoReset property for [Setting] sections. Mods Hack Support - Fixed issues with making the game DPI aware. - Now adds the title of the current main mod to the Game window title if one is enabled. No Introduction Movies Removed this mod as it has been replaced by a hack that does the same thing. Hacks Aspect Ratio Support Added this new hack. This adds proper support for aspect ratios other than 4:3. Bug Fixes Added this new hack. This adds various toggleable bug fixes. Cheat Keys Custom Limits Added support for changing the maximum amount of Regions. No Introduction Movies Added this new hack to replace the old mod. Screenshots Made this hack into a mod hack instead of it being always enabled. It still defaults to enabled. Widescreen Removed this hack as it has been succeeded by Aspect Ratio Support.
https://docs.donutteam.com/docs/lucasmodlauncher/versions/version_1.15/
2019-09-15T07:35:41
CC-MAIN-2019-39
1568514570830.42
[array(['/img/lucasmodlauncher/version-banners/1.15.png', 'version 1.15 release banner'], dtype=object) ]
docs.donutteam.com
Q: When I type on the web console of a Windows Guest, I see the same key appearing multiple times. E.g. I type "hello" but in the Guest I see "hheeelllloo". A: Switch the keyboard type of the Guest to USB. PS2 keyboards are known to be problematic in Windows Guests.
https://docs.flexvdi.com/display/V31/Problems+with+flexVDI+Dashboard
2019-09-15T07:39:42
CC-MAIN-2019-39
1568514570830.42
[]
docs.flexvdi.com
View contents of a file that is being tracked with Change Tracking File content tracking allows you to view the contents of a file before and after a change that is being tracked with Change Tracking. To do this, it saves the file contents to a storage account after each change occurs. Requirements A standard storage account using the Resource Manager deployment model is required for storing file content. Premium and classic deployment model storage accounts should not be used. For more information on storage accounts, see About Azure storage accounts. The storage account used can only have one Automation account connected. Change Tracking is enabled in your Automation account. Enable file content tracking In the Azure portal, open your Automation account, and then select Change tracking. On the top menu, select Edit Settings. Select File Content and click Link. This opens the Add Content Location for Change Tracking pane. Select the subscription and storage account to use to store the file contents. If you want to enable file content tracking for all existing tracked files, select On for Upload file content for all settings. You can change this for each file path afterwards. Once enabled, the storage account and the SAS URIs are shown. The SAS URIs expire after 365 days and can be recreated by clicking the Regenerate button. Add a file The following steps walk you through turning on change tracking for a file: On the Edit Settings page of Change Tracking, select either the Windows Files or Linux Files tab, and click Add. Fill out the information for the file path and select True under Upload file content for all settings. This setting enables file content tracking for that file path only. Viewing the contents of a tracked file Once a change has been detected for the file, or for a file in the path, it shows in the portal. Select the file change from the list of changes. The Change details pane is displayed. On the Change details page, you see the standard before and after file information; in the top left, click View File Content Changes to see the contents of the file. The new page shows you the file contents in a side-by-side view. You can also select Inline to see an inline view of the changes. Next steps Visit the tutorial on Change Tracking to learn more about using the solution: - Use Log searches in Azure Monitor logs to view detailed change tracking data.
https://docs.microsoft.com/en-us/azure/automation/change-tracking-file-contents
2019-09-15T08:33:19
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
Adding a layer creates it as a layer in the Timeline view and a node in the Node view. NOTE: To learn more about the layer parameters, see Element / Drawing Node.
https://docs.toonboom.com/help/harmony-15/premium/timing/add-layer-column.html
2019-09-15T07:39:12
CC-MAIN-2019-39
1568514570830.42
[]
docs.toonboom.com
Performance Use the Performance module to display performance statistics for business units, sites, and activities. This module includes a Monitor view for the most recently completed timestep, an Intra-day table view, and an Alerts view. All views display the following statistics, which are calculated by the WFM servers: These statistics are presented for a 24-hour range, with 12 hours before, and 12 hours after, the current timestep. The displayed data is automatically refreshed at least once per minute. For timesteps in the future, the displayed "actual" values are those that WFM Web anticipates by comparing past data with the planned forecast. These anticipated values do not affect the schedule, and they are not saved. The anticipated values can help you make real-time changes to your site setup. For example, unneeded agents can be sent home if you are overstaffed, or extra agents can be called in if you are understaffed. You cannot change the schedule from the Performance views; use the Master Schedule Intra-Day view to do so.
https://docs.genesys.com/Documentation/WM/8.5.2/SHelp/PerformanceMdl
2019-09-15T07:53:53
CC-MAIN-2019-39
1568514570830.42
[]
docs.genesys.com
Open Design¶ Common development cycle¶ OpenStack deliverables are released in various ways and on different time-based or feature-based schedules (see the Release Management chapter). What binds them all, however, is our common 6-month development cycle. OpenStack development cycles are named alphabetically (Austin, Bexar, Cactus, Diablo…) and may result in one or more releases. Stable branches (see the chapter on Stable Branches) are only cut once though, from the last release of a given deliverable for that development cycle. Forum¶ The Forum is an event held during the OpenStack Summits, every 6 months. They are a key part of the “Open Design” promise: it is where the various constituents of our community get together to bring feedback and discuss the future of OpenStack. Goals¶ The Forum has several goals: Close the feedback loop, by getting developers, operators and end users of OpenStack in the same room Get feedback on the last release Get priorities for the next development cycle For features at the early design stages, get feedback on early direction and quick convergence across a broad range of attendees The Forum is not for traditional speaker-to-audience presentations. They should all be open discussions about a specific technical topic. Therefore, usage of slides or microphones is discouraged, to reduce the distance between the session proposer/moderator and the rest of the audience. Pre-summit organization¶ The list of topics being discussed is selected by a selection committee including User Committee members, Technical Committee members and Foundation staff, based on input from the community. Topics are first brainstormed by each group on separate etherpads, then formally submitted for committee review. During the summit¶ We use etherpad.openstack.org to take notes during sessions. It is generally a good idea to prepare those etherpads in advance and list them on the common list of Forum etherpads. Each session should have a moderator to keep the discussion on track, and try to get to actionable outcomes before the end of the session. It is generally the person who suggested the session in the first place. As a courtesy to attendees, make sure to start your session on time and end your session on time, so that they can easily jump to another room. Vacate the room at the end of the session, and continue the discussion in the hallway if necessary. Project Teams Gathering¶ The Project Teams Gathering (or PTG) is an event held every 6 months, at the beginning of each development cycle. Attendance is key to the developer productivity during the rest of the cycle. Each room at the PTG is freely organized by each project team or workgroup. Goals¶ Bootstrap the upcoming development cycle, get alignment on the team priorities, define common objectives, assign tasks Make quick progress on issues that are difficult to solve otherwise, by getting the right set of people working on it together at the same time Meet in person with fellow contributors, address social issues, spend time together during the evening, reset relationships after heated online discussions Take advantage of cross-pollination, by taking time to discuss inter-project issues with members of other teams that will be present during the same week Sprints¶ In addition to Forums and PTGs, teams may meet in-person or virtually in specific sprints. For those, it is generally good to have a specific objective for the sprint and use it to get a specific goal done. 
They should be announced on the mailing-list and open to any contributor who wants to join. To enable people to focus on the same topic at the same time, without factoring in the monetary and life cost of travel, we also support Virtual Sprints held on IRC. See for details.
https://docs.openstack.org/project-team-guide/open-design.html
2019-09-15T07:32:40
CC-MAIN-2019-39
1568514570830.42
[]
docs.openstack.org
Integer and Float now support min, max and choices. Choices must respect min and max (if provided). Added Port type as an Integer in the closed interval [0, 65535]. Added minimum and maximum value limits to FloatOpt.
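As an illustration of these option types, the sketch below registers a few options; the option names (worker_count, log_level, sample_rate, listen_port) are made up for the example, and it assumes a release of oslo.config that includes the changes listed above.

```python
from oslo_config import cfg

opts = [
    # Integer with minimum/maximum bounds.
    cfg.IntOpt('worker_count', min=1, max=32, default=4,
               help='Number of worker processes.'),
    # Integer restricted to an explicit set of choices.
    cfg.IntOpt('log_level', choices=[10, 20, 30], default=20,
               help='Logging level.'),
    # Float with the new minimum/maximum limits.
    cfg.FloatOpt('sample_rate', min=0.0, max=1.0, default=0.5,
                 help='Fraction of requests to sample.'),
    # Port type: an Integer constrained to the closed interval [0, 65535].
    cfg.PortOpt('listen_port', default=8080,
                help='TCP port to listen on.'),
]

CONF = cfg.CONF
CONF.register_opts(opts)
CONF([])  # parse an empty command line so the defaults become available
print(CONF.worker_count, CONF.listen_port)
```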
https://docs.openstack.org/releasenotes/oslo.config/newton.html
2019-09-15T07:33:43
CC-MAIN-2019-39
1568514570830.42
[]
docs.openstack.org
User Security A module in the WFM Configuration Utility that you can use to set security-access limitations for all nonagent users. You can limit access to specific sites, teams, activities, and so on. In some cases, you can configure some users to have view-only access to Workforce Management (WFM) objects, and other users to have full access. You can also grant permission to edit the schedule without having the changes incorporated into the official version of the schedule. See also Security Access, Pending Schedule Changes, and Modules Pane. Glossary User Security The Workforce Management (WFM) Configuration Utility User Security module enables you to fine-tune the precise access each user has to WFM.
https://docs.genesys.com/Documentation/WM/8.5.0/Admin/UserSec
2019-09-15T07:29:46
CC-MAIN-2019-39
1568514570830.42
[]
docs.genesys.com
Read replicas in Azure Database for MySQL The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the master server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the MySQL binlog replication overview. Replicas are new servers that you manage similar to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/ month. To learn more about MySQL replication features and issues, please see the MySQL replication documentation. When to use a read replica The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the master. A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting. Because replicas are read-only, they don't directly reduce write-capacity burdens on the master. This feature isn't targeted at write-intensive workloads. The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the master and the replica. The data on the replica eventually becomes consistent with the data on the master. Use this feature for workloads that can accommodate this delay. Cross-region replication You can create a read replica in a different region from your master server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. You can have a master server in any Azure Database for MySQL region. A master server can have a replica in its paired region or the universal replica regions. The picture below shows which replica regions are available depending on your master region. Universal replica regions You can always create a read replica in any of the following regions, regardless of where your master server is located. These are the universal replica regions: Australia East, Australia Southeast, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2. Paired regions In addition to the universal replica regions, you can create a read replica in the Azure paired region of your master server. If you don't know your region's pair, you can learn more from the Azure Paired Regions article. If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency. However, there are limitations to consider: Regional availability: Azure Database for MySQL is available in West US 2, France Central, UAE North, and Germany Central. However, their paired regions are not available. Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia. This means that a master server in West India can create a replica in South India. However, a master server in South India cannot create a replica in West India. 
This is because West India's secondary region is South India, but South India's secondary region is not West India. Create a replica If a master server has no existing replica servers, the master will first restart to prepare itself for replication. When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the master server. The creation time depends on the amount of data on the master and the time since the last weekly full backup. The time can range from a few minutes to several hours. Every replica is enabled for storage auto-grow. The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent a break in replication caused by out of storage errors. Learn how to create a read replica in the Azure portal. Connect to a replica When you create a replica, it doesn't inherit the firewall rules or VNet service endpoint of the master server. These rules must be set up independently for the replica. The replica inherits the admin account from the master server. All user accounts on the master server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the master server. You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for MySQL server. For a server named myreplica with the admin username myadmin, you can connect to the replica by using the mysql CLI: mysql -h myreplica.mysql.database.azure.com -u myadmin@myreplica -p At the prompt, enter the password for the user account. Monitor replication Azure Database for MySQL provides the Replication lag in seconds metric in Azure Monitor. This metric is available for replicas only. This metric is calculated using the seconds_behind_master metric available in MySQL's SHOW SLAVE STATUS command. Set an alert to inform you when the replication lag reaches a value that isn’t acceptable for your workload. Stop replication You can stop replication between a master and a replica. After replication is stopped between a master server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the master server. When you choose to stop replication to a replica, it loses all links to its previous master and other replicas. There is no automated failover between a master and its replica. Important The standalone server can't be made into a replica again. Before you stop replication on a read replica, ensure the replica has all the data that you require. Learn how to stop replication to a replica. Considerations and limitations Pricing tiers Read replicas are currently only available in the General Purpose and Memory Optimized pricing tiers. Master server restart When you create a replica for a master that has no existing replicas, the master will first restart to prepare itself for replication. Please take this into consideration and perform these operations during an off-peak period. New replicas A read replica is created as a new Azure Database for MySQL server. An existing server can't be made into a replica. You can't create a replica of another read replica. Replica configuration A replica is created by using the same server configuration as the master. 
After a replica is created, several settings can be changed independently from the master server: compute generation, vCores, storage, and backup retention period. The pricing tier can also be changed independently, except to or from the Basic tier. Important Before a master server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the master. Stopped replicas If you stop replication between a master server and a read replica, the stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again. Deleted master and standalone servers When a master server is deleted, replication is stopped to all read replicas. These replicas become standalone servers. The master server itself is deleted. User accounts Users on the master server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the master server. Server parameters To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. The following server parameters are locked on both the master and replica servers: The event_scheduler parameter is locked on the replica servers. Other - Global transaction identifiers (GTID) are not supported. - Creating a replica of a replica is not supported. - In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. Read more in the MySQL reference documentation for more information. - Ensure the master server tables have primary keys. Lack of primary keys may result in replication latency between the master and replicas. - Review the full list of MySQL replication limitations in the MySQL documentation Next steps Feedback
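As a usage sketch, the Python fragment below (using the pymysql driver; the server names, database, user, and password are placeholders, and TLS options are omitted for brevity) shows the read/write split described above: writes go to the master server while read-only reporting queries go to the replica.

```python
import pymysql

def connect(host, server_name):
    # Azure Database for MySQL expects the user name as <admin>@<servername>.
    return pymysql.connect(host=host,
                           user="myadmin@" + server_name,
                           password="<password>",
                           database="mydb")

master = connect("mydemoserver.mysql.database.azure.com", "mydemoserver")
replica = connect("myreplica.mysql.database.azure.com", "myreplica")
try:
    # Writes are directed to the master...
    with master.cursor() as cur:
        cur.execute("INSERT INTO orders (sku) VALUES ('PROD-1')")
    master.commit()

    # ...while read-only reporting queries are directed to the replica.
    with replica.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        print(cur.fetchone())
finally:
    master.close()
    replica.close()
```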
https://docs.microsoft.com/en-us/azure/mysql/concepts-read-replicas
2019-09-15T08:15:34
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
Network Time¶ OPNsense ships with a standard NTPd server, which synchronizes time with upstream servers and provides time to connected clients. A newly installed firewall comes with NTP enabled on all interfaces (the firewall blocks all non-LAN access in this case), forwarding queries to one of the X.opnsense.pool.ntp.org upstreams (X is any of 0, 1, 2, 3). General settings¶ In most cases the default setup is ready to use; below you will find some of the general options which can be configured. Note NTP is disabled if no Time servers are configured. There is no separate enable/disable toggle. GPS¶ If you own a GPS receiver which supports NMEA, you can use it as a reference clock and configure it in this section. For some brands, settings are preconfigured; you can also use custom settings. PPS¶ If your GPS receiver supports PPS (Pulse Per Second) output or you have a separate PPS signal available, you can configure the serial port to use along with some other settings here.
https://docs.opnsense.org/manual/ntpd.html
2019-09-15T08:31:32
CC-MAIN-2019-39
1568514570830.42
[]
docs.opnsense.org
Cilium integration with Flannel (beta)¶ This guide contains the necessary steps to run Cilium on top of your Flannel cluster. If you have a cluster already set up with Flannel you will not need to install Flannel again. This Cilium integration with Flannel was performed with Flannel 0.10.0 and Kubernetes >= 1.9. If you find any issues with previous Flannel versions, please feel free to reach out to us so we can help you. Note This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems. The feature lacks support for the following, which will be resolved in upcoming Cilium releases: - L7 policy enforcement Flannel installation¶ NOTE: If kubeadm is used, then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init to ensure that the podCIDR is set. kubectl apply -f Wait until all pods are in the ready state before proceeding to the next step. Cilium installation¶ Download the Cilium release tarball and change to the kubernetes install directory: curl -LO tar xzvf 1.6.1.tar.gz cd cilium-1.6.1/install/kubernetes Install Helm to prepare for generating the deployment artifacts based on the Helm templates. Generate the required YAML file and deploy it: helm template cilium \ --namespace kube-system \ --set global.flannel.enabled=true \ > cilium.yaml Set global.flannel.uninstallOnExit=true if you want Cilium to uninstall itself when the Cilium pod is stopped. If the Flannel bridge has a different name than cni0, you must specify the name by setting global.flannel.masterDevice=.... Optional step: If your cluster already has pods being managed by Flannel, there is also an option available that allows Cilium to start managing those pods without requiring them to be restarted. To enable this functionality you need to set the value global.flannel.manageExistingContainers=true Once you have changed the ConfigMap accordingly, you can deploy Cilium. kubectl create -f cilium.yaml Cilium might not come up immediately on all nodes, since Flannel only sets up the bridge network interface that connects containers with the outside world when the first container is created on that node. In this case, Cilium will wait until that bridge is created before marking itself as Ready.
https://cilium.readthedocs.io/en/stable/gettingstarted/flannel-integration/
2019-09-15T08:20:34
CC-MAIN-2019-39
1568514570830.42
[]
cilium.readthedocs.io
Deployment Details Overview The Details page provides deployment details for each deployment that you submit when Deploy an Application. Accessing Deployment Details Once deployed, you can access the Deployment Details page in one of two ways: From the Deployments list page, by clicking on the deployment name as highlighted in the following screenshot: From the Virtual Machines list page, by clicking on the deployment button as highlighted in the following screenshot: Either way, you see the Details page as shown in the following screenshot: The Details page has three sections: The Header The Topology Panel (right) The Main Panel (left) The Header The App logo, state (stopped, running, suspended, etc.), favorites star, deployment name, access link (when applicable), and the job actions dropdown menu are included in the top part of this section. Click on the job actions dropdown icon to display a menu as visible in the following screenshot – the list of available actions vary based on the current job state. The suspend operation is not available for pure container or hybrid applications. The application name, environment name, and cloud region are listed in the lower left section and the truncated names expand on hover as displayed in the following screenshot: The run time, approx cost (hourly compute charges, projected monthly cost factoring in suspension policy savings, and on hover, the following details as well: projected monthly cost before savings from suspension policy and monthly savings - see figure below), accrued cost (since the job started), deployment owner are listed in the right side of this section. The following screenshot displays the cost factoring details. The Main Panel The Main panel has the following subsections: The Tiers Tab The Job Details Tab The History Tab The Tiers Tab The Tiers Tab contains a list of per-tier sub-panels: Up to 5 tiers display initially in the expanded mode (showing the tier header and tier main section). If more tiers exist a Load more button allows you to load 10 more tiers at a time. You can compress/expand each tier by clicking the triangular icon. The displayed information differs for VM-based tiers and container-based tiers. VM-Based Tiers The following screenshot displays a VM-based tier. Tier Header: Service logo, tier state (running, starting, etc.), tier name, approx cost (hourly compute charges, projected monthly cost factoring in suspension policy savings, on hover: projected monthly cost before savings and suspension policy. Bulk actions are supported on VMs – you can select multiple VMs and perform the action from a menu that appears on the top right corner of the table. Search VMs using keywords. Load More: Initially 5 VMs are loaded. You can click Load More to load 5 more VMs or the remaining VMs if this number is less than 5. The scaling actions dropdown is only present for tiers that scale when the tier is in the running state. Click the dropdown icon to the right of the tier state to display and select from the available choices for each tier as described in the following table: Both batch actions or single actions are available to terminate VMs. Tier Main Section: For each tier, the main section contains tabs for VMs and Details. The VMs Tab The VMs page is divided into sections with self explanatory filters and status details. The Status buttons display the VM count in a certain state. The buttons are displayed based on the state of VMs (running, stopped, terminated, error, and so forth) in the tier. 
The list of VMs details in a tier along with additional information when you hover over most fields. For example, hovering over the IP addresses entry displays a balloon with specific IP addresses as displayed in the following screenshot. Click the VM name to open a new tab showing VM details. Click the dropdown icon next to the VNC link to display the VM actions dropdown menu. The list of actions depends on the current VM state as displayed in the following screenshot. Click the View Task Logs to view a pop up window with the task log messages for that VM in reverse chronological order as as displayed in the following screenshot. You can select all or some of the VMs in the list using check boxes to the left of each VM entry. Click on at least one checkbox to view the dropdown menu icon to appear above the list of VMs and to the right as shown in the following screenshot. Click the dropdown icon to display the VM actions menu applicable to the least common denominator of all VMs selected. The VM action menu corresponding to the one selected error state VM is shown in the following screenshot. If there are no actions available for a particular VM, than there will not be an actions dropdown available. The VM Details Tab This tab contains basic information about the tier such as start time, stop time (if the tier has terminated), current policy settings and tags, and deployment parameters. If enabled in the deployment environment at the time of deployment, you have the ability to update policy settings and tags. The following screenshot is an example. Hover over the scaling policy's info icon to view the info balloon displayed in the following screenshot. You can change or remove an existing scaling policy or add one, if none is set, by clicking the dropdown menu icon next to the scaling policy label as displayed in the following screenshot. The Change Policy option displays a new dropdown list as displayed in the following screenshot that shows all available scaling policies for this tier based on restriction specified in the application profile's Topology Modeler tab. Select a policy and click Apply to replace the old policy with a new one. You can add/remove security policies or tags for a tier by clicking the corresponding Edit link as displayed in the following screenshot. Google Cloud Nuance Google Cloud does not support attachment of tags to VMs. Although the Workload Manager UI will allow tags to be specified, and shows success, tags are not added. Container-Based Tiers Container-based tiers differ based on the number of containers per pod. The following screenshot displays a single container per pod. The following screenshot displays multiple containers per pod (for more details on Placement Groups, see the Define Resource Placement). The tier header details are the same as the VM-based tier header except for the following factors: In Approx cost, the projected monthly cost does not factor in suspension policy savings because suspension policies cannot be applied to containerized tiers. In the Scaling actions dropdown, units are replicas, not VMs. The main section contains tabs for Replicas and Tier details. A placement group is represented by a rectangle that you add to the topology modeler canvas by clicking the Create A Group button. See Define Resource Placement > Container-Specific Resource Placement for additional details. The Replicas Tab The Replicas tab contains the following details: The Status buttons display the replica count in a certain state. 
The buttons are displayed based on the state of replica state (running, stopped, terminated, error, and so forth) in the tier. The list of replica details in a tier along with additional information when you hover over most fields. For example, hovering over the number of parameters provides additional parameter details as displayed in the following screenshot. The Container Details Tab The container tier details tab is similar to the VM tier details tab with the following exceptions. There are no scaling policies for container tiers. For Kubernetes container tiers, the namespace, and IP addresses and internal endpoints related to the ClusterIP, NodePort and LoadBalancer service types in the tier are listed. See the following screenshot for an example. The Details Tab General Information about deployment, for example, Approval/Start time and so forth are listed in the Details tab. This tab also contains actions related to Aging/Suspension/Security Policies (Remove/Change/Add) along with Tags, Global Parameters, and Metadata information. See the Policy Management section for additional details. The History Tab A list of all job state changes and policy changes since the job was deployed, in reverse chronological order are listed in the History tab. Besides viewing the details on this page, you can perform the following actions: Click the magnifying glass icon in the upper right to filter the list to entries that contain the text in the filter. Click the download icon in the upper right to download the complete history as a CSV file. The History tab provides details on actions taken or incidents that occurred during the life of this deployment. You can download the details on this page to a CSV file. The Topology Panel The Topology panel displays the application topology with status indicator lights (red, yellow, green) for each tier. Filter buttons appear above the topology diagram representing all tiers and tiers in various states: running, error. Click on a filter button includes only tiers in that state as displayed in the following screenshot. A colored dot on the upper left corner of each tier icon represents the status of the tier during the deployment process as described in the following table. Notifications For all actions, notifications are displayed at the top of the page. Three types of notifications are displayed in the details page: In Progress – stays on the page during the action. Success – stays for 7 seconds. Error – you must specifically acknowledge this notification. Three or more notifications of the same type are grouped together and you can click View Details to see details for this group. - No labels
https://docs.cloudcenter.cisco.com/display/WORKLOADMANAGER/Deployment+Details
2019-09-15T08:01:33
CC-MAIN-2019-39
1568514570830.42
[array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.27.15%20PM.png?version=1&modificationDate=1542067530000&api=v2', None], dtype=object) array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.28.18%20PM.png?version=1&modificationDate=1542067530000&api=v2', None], dtype=object) array(['/download/thumbnails/34145652/Screen%20Shot%202018-10-16%20at%201.54.11%20PM.png?version=1&modificationDate=1542067530000&api=v2', None], dtype=object) array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.30.23%20PM.png?version=1&modificationDate=1542067531000&api=v2', None], dtype=object) array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.34.55%20PM.png?version=1&modificationDate=1542067531000&api=v2', None], dtype=object) array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.49.54%20PM.png?version=1&modificationDate=1542067530000&api=v2', None], dtype=object) array(['/download/attachments/34145652/Screen%20Shot%202018-10-15%20at%2011.50.19%20PM.png?version=1&modificationDate=1542067530000&api=v2', None], dtype=object) ]
docs.cloudcenter.cisco.com
Overview Commands are issued from the parent window (your website) to the JavaScript API. The API then relays the command via window.postMessage() to the iframed flipbook. Flipbook-related commands updateEventSettings It is possible to dictate how the iframed flipbook should handle events when they have been triggered. The method accepts a single argument that is an object containing the following keys, which references events that the v2 API listens to: The keys should contain an object of the following format: { preventDefault: <boolean> }, where <boolean> is true or false. When preventDefault is set to true, the flipbook will perform the default action associated with each event. If set to false, the default action will be skipped. An example use: // Disable opening of shop basket AND page element click on shop items iPaperAPI.updateEventSettings({ onBasketClick: { preventDefault: true }, onPageElementClick: { preventDefault: true } }); Publication-related commands goToPage Navigate to a specific page: the page number is to be provided as the one and only argument of the function. For example: iPaperAPI.goToPage(4); goToPreviousPage Go to the previous spread (if flipbook has a spread-based layout), or the previous page (if flipbook has a page-based layout). This method accepts no arguments: iPaperAPI.goToPreviousPage(); goToNextPage Go to the next spread (if flipbook has a spread-based layout), or the next page (if flipbook has a page-based layout). This method accepts no arguments: iPaperAPI.goToNextPage(); Shop-related commands addItem Programmatically adds a custom item to the basket. This method accepts a single argument, consisting of properties of the shop item entity. Example use: iPaperAPI.addItem({ title: 'My product', description: 'Product description', productID: 'PROD-25B', price: '29.95', originPage: 6, quantity: 1 }); Advanced use If you are accustomed to the .trigger() regime (such as that implemented in JQuery), you can also use it as such: iPaperAPI.trigger('goToPage', 4); Refer to Event listeners and command triggers on the advanced usage page for a wholesome example.
https://docs.ipaper.io/javascript-api/commands
2019-09-15T07:52:16
CC-MAIN-2019-39
1568514570830.42
[]
docs.ipaper.io
IPersistStorage::HandsOffStorage A version of this page is also available for Windows Embedded CE 6.0 R3 4/8/2010 This method instructs the object to release all storage objects that have been passed to it by its container and to enter HandsOff mode, in which the object cannot do anything and only the close operation works. Syntax HRESULT HandsOffStorage(void); Parameters None. Return Value S_OK indicates that the object has successfully entered HandsOff mode. Remarks This method causes an object to release any storage objects that it is holding and to enter HandsOff mode until a subsequent IPersistStorage::SaveCompleted call. In HandsOff mode, the object cannot do anything and the only operation that works is a close operation. A container application typically calls this method during a full save or low-memory full save operation to force the object to release all pointers to its current storage. In these scenarios, the HandsOffStorage call comes after a call to either the OleSave function or the IPersistStorage::Save method, putting the object in HandsOffAfterSave mode. Calling this method is necessary so the container application can delete the current file as part of a full save, or so it can call the IRootStorage::SwitchToFile method as part of a low-memory save. A container application also calls this method when an object is in Normal mode to put the object in HandsOffFromNormal mode. While the component object is in either HandsOffAfterSave or HandsOffFromNormal mode, most operations on the object will fail. Thus, the container should restore the object to Normal mode as soon as possible. The container application does this by calling the IPersistStorage::SaveCompleted method, which passes a storage pointer back to the component object for the new storage object. To determine whether the platform supports this interface, see Determining Supported COM APIs. Notes to Implementers This method must release all pointers to the current storage object, including pointers to any nested streams and storages. If the object contains nested objects, the container application must recursively call this method for any nested objects that are loaded or running. Requirements See Also Reference OleSave IPersistStorage::Save IPersistStorage::SaveCompleted IRootStorage::SwitchToFile
https://docs.microsoft.com/en-us/previous-versions/aa910219%28v%3Dmsdn.10%29
2019-09-15T08:37:56
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
Commerce v1 Products Variations Commerce doesn't have the concept of product variations. Instead, different variations of a product are simply unique product records with their own name, SKU, description, price, stock etc. To show different product options to your customers, you'd integrate using the Product List TV or Product Matrix TV. Depending on your integration path, the commerce.get_products or commerce.get_matrix snippets can then be used to let the user choose. Also see Product Catalog in the Getting Started section for more information about how to work with products, resources, and the different TVs we offer in Commerce.
https://docs.modmore.com/en/Commerce/v1/Products/Variations.html
2019-09-15T07:45:56
CC-MAIN-2019-39
1568514570830.42
[]
docs.modmore.com
Run Block Request Blocks communication with observables associated with a security incident. Before you begin Role required: sn_si.analyst About this task Note: If no implementations are available, capability actions are not displayed in product menus. The Security Operations Integration - Block Request workflow can be triggered from an observable form, or from the Security Incident Observables related list on a security incident. This example shows a Block Request from a security incident. Procedure Navigate to a security incident. Select observables from the Related List tab. Click Block Request in the Actions on selected rows... drop-down menu. The dialog box appears. Choose the implementation. Click Block. The workflow execution audit is displayed in the work notes section.
https://docs.servicenow.com/bundle/madrid-security-management/page/product/security-operations-common/task/run-block-request.html
2019-09-15T08:41:56
CC-MAIN-2019-39
1568514570830.42
[]
docs.servicenow.com
Deleting Integrate Endpoints To delete an Integrate endpoint: - Stop the endpoint(s) you want to delete. - If you need to delete multiple endpoints at the same time, select all of the endpoints that need to be removed. To achieve this, press the key while clicking on the names of the endpoints that need to be deleted under the Coder Navigator view. - Right click on any of the selected endpoints you want to delete. Click Delete from the appearing context menu. - Confirm your action.
https://docs.torocloud.com/integrate/developing/endpoints/deleting/
2019-09-15T07:54:21
CC-MAIN-2019-39
1568514570830.42
[array(['../../../placeholders/img/coder-studio/deleting-endpoints.png', 'Deleting an Integrate endpoint'], dtype=object) array(['../../../placeholders/img/coder/deleting-endpoints.png', 'Deleting an Integrate endpoint'], dtype=object) ]
docs.torocloud.com
A rule which addresses belonging to this package must pass. This package is applied only to addresses that pass this filter. Addresses in this package. Range of addresses in this package. Range of addresses to be excluded from this package. A file URL holding specific addresses to be polled. Each line in the URL file can be one of: <IP><space>#<comments>.
http://docs.opennms.org/opennms/branches/jira-NMS-12165-2/xsds/thresholding.xsd
2019-09-15T08:45:38
CC-MAIN-2019-39
1568514570830.42
[]
docs.opennms.org
Customer data management This article describes the customer data associated with the Workspace Environment Management (WEM) service. It provides information concerning the collection, storage, and retention of customer data involved. Overview WEM uses intelligent resource management and profile management technologies to deliver the best possible performance, desktop logon, and application response times for Citrix Virtual Apps and Desktops deployments. It is a software-only, driver-free solution. Data location The data sources below are aggregated in a Microsoft Azure Cloud environment located in the United States. Data collection The WEM service involves three types of customer data: Logs collected from the WEM management console and from the WEM infrastructure services WEM service agent actions and policies defined by the administrator Statistics associated with end-user activity that are reported by WEM service agent Data control and storage Log files - You can use the WEM management console (Manage tab) to control the log settings associated with the WEM service at any time. You can also enable or disable the log function. The "Citrix WEM Database Management Utility Debug Log.log" log file is located in the WEM infrastructure service installation directory. WEM service agent actions and policies - All the actions and policies you set up are saved and stored in the back-end Azure database and are accessible only to you through the WEM management console (Manage tab). Statistics on end-user activity - All statistics you monitor in the WEM management console (Manage tab) are saved and stored in the back-end Azure database and are accessible only to you through the WEM management console. Data retention The customer data associated with the WEM service are retained in an identifiable form during the entire service period.
https://docs.citrix.com/en-us/workspace-environment-management/service/reference/customer-data-management.html
2019-09-15T08:32:21
CC-MAIN-2019-39
1568514570830.42
[]
docs.citrix.com
Captures the VM by copying virtual hard disks of the VM and outputs a template that can be used to create similar VMs. Converts virtual machine disks from blob-based to managed disks. Virtual machine must be stop-deallocated before invoking this operation. The operation to create or update a virtual machine. Shuts down the virtual machine and releases the compute resources. You are not billed for the compute resources that this virtual machine uses. The operation to delete a virtual machine. Sets the state of the virtual machine to generalized. Retrieves information about the model view or the instance view of a virtual machine. Retrieves information about the run-time state of a virtual machine. Lists all of the virtual machines in the specified resource group. Use the nextLink property in the response to get the next page of virtual machines. Lists all of the virtual machines in the specified subscription. Use the nextLink property in the response to get the next page of virtual machines. Lists all available virtual machine sizes to which the specified virtual machine can be resized. Gets all the virtual machines under the specified subscription for the specified location. The operation to perform maintenance on a virtual machine. The operation to power off (stop) a virtual machine. The virtual machine can be restarted with the same provisioned resources. You are still charged for this virtual machine. The operation to redeploy a virtual machine. Reimages the virtual machine which has an ephemeral OS disk back to its initial state. The operation to restart a virtual machine. The operation to start a virtual machine. The operation to update a virtual machine.
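For illustration, the requests-based sketch below calls a few of these operations directly against their REST endpoints. The subscription ID, resource group, VM name, bearer token, and api-version value are placeholders to replace with your own, and a production client would typically use an Azure SDK rather than raw HTTP calls.

```python
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
VM_NAME = "<vm-name>"
TOKEN = "<bearer-token-from-azure-ad>"
API_VERSION = "2019-03-01"  # example value; use the version your tooling targets

BASE = ("https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
        "/providers/Microsoft.Compute/virtualMachines/{vm}").format(
            sub=SUBSCRIPTION, rg=RESOURCE_GROUP, vm=VM_NAME)
HEADERS = {"Authorization": "Bearer " + TOKEN}
PARAMS = {"api-version": API_VERSION}

# Retrieve the model view of the virtual machine.
vm = requests.get(BASE, headers=HEADERS, params=PARAMS)
print(vm.status_code, vm.json().get("name"))

# Power off (stop) the VM; compute resources stay provisioned and billed.
requests.post(BASE + "/powerOff", headers=HEADERS, params=PARAMS)

# Deallocate instead to release the compute resources and stop compute billing.
requests.post(BASE + "/deallocate", headers=HEADERS, params=PARAMS)
```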
https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines
2019-09-15T07:40:02
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
Properties configuration This chapter contains the list of configuration properties for Onegini IDP. - IDP core properties - IDP extension properties - Header - Common - Authentication - Caching - Admin - Database - SMS - SAML - Remote Cache - Rest Services - Account link - Encryption - Persons API - Events API - Credentials API - Extension API - Statistics API - Configuration API - Rest API - Branding - Token - Statistics - Externally delivered code - BankId - Kerberos - Extension - Delegated User Management (DUM) module configuration - Onegini Insights configuration - IDP remote cache values encryption - IDP properties encryption IDP core properties The following properties must be defined as environment properties in the Onegini IDP Core Docker container. Extension wiring Properties encryption Java Key Store Logging Host/Reverse proxy IDP extension properties The following are the properties that must be defined as environment properties in the Onegini IDP Extension Docker container. The properties are propagated from Onegini IDP Extension to Onegini IDP Core as described in the Applications setup section. Header Common Authentication Caching Admin Database SMS SAML Remote Cache Rest Services Account link Encryption Persons API Events API Credentials API Extension API Statistics API Configuration API Rest API Branding Token Authentication and action token configuration properties Statistics Externally delivered code BankId Kerberos In order to allow users to authenticate over the Kerberos protocol, the application requires a valid path to a keytab file. The keytab file can be provided either by the use of the persistable properties functionality or by mounting a volume from the Docker host. If the second solution (volume) is picked, the PERSISTABLE_PROPERTY_KERBEROS_KEYTAB_PATH and PERSISTABLE_PROPERTY_KERBEROS_KEYTAB_CONTENTS properties are not required. Extension Delegated User Management (DUM) module configuration Onegini Insights configuration Configure the following properties to show Onegini Insights in the Admin console. IDP remote cache values encryption The Onegini IDP supports cached values encryption, which means that each value stored within a remote cache may be encrypted. Cache value encryption is done in the same way as properties encryption; more information on this topic can be found in IDP properties encryption. IDP properties encryption The Onegini IDP supports properties encryption, which means that each property passed to the application can be encrypted. The open source library Jasypt is used for this with a strong encryption algorithm, which is not present in the standard JRE security provider implementation. For this reason we use the BouncyCastle security provider implementation. Prerequisites As noted above, Jasypt is used for property encryption. Please download and install it; it only needs to be extracted. Unzip the library into a directory of your choice, e.g. the /opt directory. Encryption used by Onegini IDP requires an additional library (Bouncy Castle) to be installed. Download the latest version of it and place the jar file into Property values encryption Property encryption is done by the script provided with the Jasypt library, so navigate to the directory where it is installed. Generate a master password used for encryption and: Do not forget to use the generated master password as the value for the PROPERTIES_ENCRYPTION_KEY property.
Below is an example of an encrypted property: IDP_BRANDING_NAME=ENC(IlHrIsl2cZl5WH0xQmSKC7SimY6yLD7LAWPtGV4DtfpDbmIZDY0aLt6+diHXwxcm) You can verify the encryption by running ./decrypt.sh providerClassName="org.bouncycastle.jce.provider.BouncyCastleProvider" algorithm="PBEWITHSHA256AND256BITAES-CB
https://docs.onegini.com/cim/idp/6.2.1/configuration/properties.html
2019-09-15T07:28:08
CC-MAIN-2019-39
1568514570830.42
[]
docs.onegini.com
Boot¶ hangs at “booting…”¶ On some serial connected devices the console settings are different, in which case you would not be able to start the installer. If you can reach the loader prompt, you could try to change some kernel parameters before booting. set hint.uart.0.flags=0x0 set hint.uart.1.flags=0x10 set comconsole_speed=115200 set comconsole_port=0x2F8 set console=comconsole boot Note To enter the loader prompt press 3 when the OPNsense boot menu is visible After installation, you could persist these settings in
https://docs.opnsense.org/troubleshooting/boot.html
2019-09-15T08:15:13
CC-MAIN-2019-39
1568514570830.42
[]
docs.opnsense.org
First, go to console.symbyoz.io If you don't know your back-office login & password, click "Modify" and copy the URL into a new page. Connect to the back-office In the left menu, go to: Configuration >> Modules Click the green Add new button Add a Module Title (the name that appears in the back-office) Add a Module Name (the database class name) Click the green Create button Your module is now visible in the module list. Now we must add custom fields inside it Click the yellow Edit button Click on Module custom fields Here you can see all the fields inside your module. Id, Created at and Updated at are default fields from the mongoDB document. Click the Add New button to add a new custom field Add a Field name Add a Field title Choose an input type from the 25 custom field types. Click the green Create button The database only manages 9 types of fields. You can change the type of a field in the back-office at any time, as long as it remains in the same database category (Array, String, Date, Number, File, Pointer, Boolean, ...) Example: You can change Text to Email or Phone... But you cannot change Text to Array You can organize the display order of fields by Drag & Drop You can choose whether a field is: Compulsory Visible only to super-admins in the back-office Unique Visible in filters Visible in the module form (create or edit) Visible in the module listing To generate your custom module, click the Generate button Your custom module is now ready to use You can access it directly from the left menu
https://docs.symbyoz.io/tutorials/create_new_module_symbyoz
2019-09-15T07:54:50
CC-MAIN-2019-39
1568514570830.42
[]
docs.symbyoz.io
When you are making a multiplayer game, in addition to synchronizing the properties of networked GameObjects, you are likely to need to send, receive, and react to other pieces of information - such as when the match starts, when a player joins or leaves the match, or other information specific to your type of game, for example a notification to all players that a flag has been captured in a “capture-the-flag” style game. Within the Unity networking High-Level API there are three main ways to communicate this type of information. Remote actions allow you to call a method in your script across the network. You can make the server call methods on all clients or individual clients specifically. You can also make clients call methods on the server. Using remote actions, you can pass data as parameters to your methods in a very similar way to how you call methods in local (non-multiplayer) projects. Networking callbacks allow you to hook into built-in Unity events which occur during the course of the game, such as when players join or leave, when GameObjects are created or destroyed, or when a new Scene is loaded. There are two types of networking callbacks that you can implement: Network manager callbacks, for callbacks relating to the network manager itself (such as when clients connect or disconnect) Network behaviour callbacks, for callbacks relating to individual networked GameObjects (such as when its Start function is called, or what this particular GameObject should do if a new player joins the game) Network messages are a “lower level” approach to sending messages (although they are still classed as part of the networking “High level API”). They allow you to send data directly between clients and the server using scripting. You can send basic types of data (int, string, etc) as well as most common Unity types (such as Vector3). Since you implement this yourself, these messages are not associated directly with any particular GameObjects or Unity events - it is up to you to decide their purpose and implement them!
https://docs.unity3d.com/2018.3/Documentation/Manual/UNetActionsAndCommunication.html
2019-09-15T07:42:52
CC-MAIN-2019-39
1568514570830.42
[]
docs.unity3d.com
OWIN Integration Guide¶ Basic setup¶ To allow scoped instances to be resolved during an OWIN request, the following registration needs to be added to the IAppBuilder instance: // You'll need to include the following namespaces using Owin; using SimpleInjector; using SimpleInjector.Lifestyles; public void Configuration(IAppBuilder app) { app.Use(async (context, next) => { using (AsyncScopedLifestyle.BeginScope(container)) { await next(); } }); } Scoped instances need to be registered with the AsyncScopedLifestyle lifestyle: var container = new Container(); container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle(); container.Register<IUnitOfWork, MyUnitOfWork>(Lifestyle.Scoped); Extra features¶ Besides this basic integration, other tips and tricks can be applied to integrate Simple Injector with OWIN. Getting the current request’s IOwinContext¶ When working with OWIN you will occasionally find yourself wanting access to the current IOwinContext. Retrieving the current IOwinContext is easy as using the following code snippet: public interface IOwinContextAccessor { IOwinContext CurrentContext { get; } } public class CallContextOwinContextAccessor : IOwinContextAccessor { public static AsyncLocal<IOwinContext> OwinContext = new AsyncLocal<IOwinContext>(); public IOwinContext CurrentContext => OwinContext.Value; } The code snippet above defines an IOwinContextAccessor and an implementation. Consumers can depend on the IOwinContextAccessor and can call its CurrentContext property to get the request’s current IOwinContext. The following code snippet can be used to register this IOwinContextAccessor and its implementation: app.Use(async (context, next) => { CallContextOwinContextAccessor.OwinContext.Value = context; await next(); }); container.RegisterInstance<IOwinContextAccessor>(new CallContextOwinContextAccessor());
https://simpleinjector.readthedocs.io/en/latest/owinintegration.html
2019-09-15T08:01:11
CC-MAIN-2019-39
1568514570830.42
[]
simpleinjector.readthedocs.io
Before you get started, one thing you may want to do is add a photo to your profile, so click Profile management. On the Personal page you can edit your details, such as your name, change your password, and view your group membership and capabilities. To add your photo, click the image to the left of your name and upload the desired photo.
http://docs.alfresco.com/process-services1.7/topics/personal_profile.html
2017-12-11T03:42:19
CC-MAIN-2017-51
1512948512121.15
[array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/defaultprocess_services1_7/3.png', 'images/3.png'], dtype=object) ]
docs.alfresco.com
Control Panel Overview In the Control Panel of ComingSoon, you will only see two tabs – Settings and Subscribers. Let’s start with the first part of the Settings tab by Enabling the module. Allow Admin This field allows you to enable or disable whether administrators will be able to access the store while it’s down for maintenance. This means they will see what the store currently looks like in its developing form, bypassing the Coming Soon page that visitors will see. Heading Title The heading title sets the headline of your Coming soon page (H1). This is the main message that newcomers will see. Sub Title The sub title appears below the heading title and serves as additional information about the status of your store. Notify Text This field can be used to invite visitors to leave their name and email address to learn more about your store, when it will launch or any other details you would like to share. Button Text This is the Call-to-Action button that users have to click in order to submit their name and email address. Countdown Date Set the date when your website will be launched and available. Time Intervals Set the measurement unit you want for the countdown. You can choose between conventional Days, Weeks, Months and Years, or more excentric Decades, Centuries, or Millennia. Background Image This will be the background image for the entire page. Logo Image Set the logo of your store so that visitors are sure where they are. Text Color This field lets you set the single text color for the entire ComingSoon page. Social Links Add the URLs to the social media profiles of your OpenCart store so that visitors can find you easily. The module integrates Facebook, Twitter, Google + and YouTube. The About page allows you to create a secondary page that will appear as a popup when opened, leaving the Coming Soon page as a background. Link Text The link to the About page can be customized with a message of your choice. About Text This is the main text that will serve as additional information about you, your store, the launch date, what customers can expect. You can use this for unique details about your story, what lead you to opening your store, or give some teasing information about future products. The final part of the Settings tab of ComingSoon lets you include your contact information such as email address your visitors can send you emails, phone number or physical address. Subscribers The email form in your ComingSoon page will collect their emails, showing them in the Subscribers tab. You will be able to view them by Subscriber Email, Subscriber Name, Date and have possible Action buttons. The Export to CSV button will allow you to export your subscribers list in a CSV file.
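The countdown shown on the page is simple date arithmetic against the Countdown Date and Time Interval you configure. Purely as an illustration (this is not taken from the module's code), the remaining time could be computed like this in Python:
import datetime

launch = datetime.datetime(2025, 6, 1, 9, 0)        # hypothetical launch date and time
remaining = launch - datetime.datetime.now()
days = remaining.days
weeks = days // 7
print(str(days) + " days (~" + str(weeks) + " weeks) until launch")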
http://docs.isenselabs.com/comingsoon/control_panel
2017-12-11T03:58:04
CC-MAIN-2017-51
1512948512121.15
[array(['/doc/comingsoon/img/settings_1.png', 'settings'], dtype=object) array(['/doc/comingsoon/img/settings_2.png', 'settings2'], dtype=object) array(['/doc/comingsoon/img/settings_3.png', 'settings3'], dtype=object) array(['/doc/comingsoon/img/settings_4.png', 'settings4'], dtype=object) array(['/doc/comingsoon/img/subscribers.png', 'subscribers'], dtype=object)]
docs.isenselabs.com
This workflow covers the following steps: Getting the Service Creating an Update There are four ways of provisioning devices in Bosch IoT Rollouts. They can either be pre-provisioned using the Management UI or the Management API, or a device (-management) itself can register with Bosch IoT Rollouts via Direct Device Integration API or Device Management Federation API. Once a device is registered, it can provide additional information in form of key-value attributes (a.k.a. Controller Attributes). First, the device is provisioned via the UI before it registers itself providing further information. 1: Pre-provision a device Bosch IoT Rollouts manages devices (a.k.a. Targets) in the Deployment view. A user provisions a new target by clicking on the of the Targets table. After creation, and until the target connects for the first time, the target is in state UNKNOWN, indicated by the icon. Now, that the device is pre-provisioned, it can register itself via DDI and provide additional information. The required authentication is provided by the device’s Security Token. This can be enabled in the System Configuration view under Authentication Configuration “Allow targets to authenticate directly …”. Enable it and then save it by clicking on the icon at the bottom of the page. The device can now authenticate itself with its security token and update its status and attributes. First, register the device by polling the device’s DDI resource. The respective replacement tokens are explained here. $ curl 'https://<HOST>/<TENANT_ID>/controller/v1/device01' -i -H 'Accept: application/hal+json' -H 'Authorization: TargetToken <TARGET_TOKEN>' { "config": { "polling": { "sleep": "00:05:00" } }, "_links": { "configData": { "href": "https://<HOST>/<TENANT_ID>/controller/v1/device01/configData" } } } You may notice, that the target state changed from UNKNOWN to REGISTERED . Finally, let’s add some attributes to the device by following the link to configData provided in response to our last call: $ curl 'https://<HOST>/<TENANT_ID>/controller/v1/device01/configData' -i -X PUT -H 'Authorization: TargetToken <TARGET_TOKEN>' -H 'Content-Type: application/json;charset=UTF-8' -d '{ "mode" : "merge", "data" : { "VIN" : "JH4TB2H26CC000000", "hwRevision" : "2" }, "status" : { "result" : { "finished" : "success" }, "execution" : "closed", "details" : [ ] } }' You can verify, that the attributes were set correctly, by checking the Attributes tab of the target details table: Custom metadata in the form of key-value pairs can be added to a device. You can edit a device’s metadata by clicking the icon in the target details table. Fill out the key and value field and click the Save Botton to add new data. More key-value pairs can be added by clicking the icon. The keys are displayed under the Metadata tab in the target details table. Click them to see their value. Similar to the Management UI, a device can be pre-provisioned using a single call to the Management API. The respective replacement tokens are explained here. $ curl 'https://<HOST>/rest/v1/targets' -u "<TENANT_ID>\<USERNAME>:<PASSWORD>" -i -X POST -H 'Content-Type: application/json;charset=UTF-8' -d '[ { "controllerId" : "device02", "name" : "Device 02", "description" : "My first Device created via Management API." 
} ]' [ { "createdBy": "CLD:83717175-0650-400a-b6f2-9a4a398fc07a", "createdAt": 1530533483880, "lastModifiedBy": "CLD:83717175-0650-400a-b6f2-9a4a398fc07a", "lastModifiedAt": 1530533483880, "name": "Device 02", "description": "My first Device created via Management API.", "controllerId": "device02", "updateStatus": "unknown", "securityToken": "51b8e7fb97fe10bd49a57ca39f19677d", "requestAttributes": true, "_links": { "self": { "href": "https://<HOST>/rest/v1/targets/device02" } } } ] As the device is now pre-provisioned, you can take the same steps as for the Management UI to register the device and update its attributes. A device can only connect and register with Bosch IoT Rollouts i.e., without having been previously provisioned via Management UI or Management API, by providing a gateway token to authenticate. This can be configured and retrieved in the System Configuration view under Authentication Configuration “Allow a gateway to authenticate …”. Enable it and then save it by clicking on the icon. The generated token can then be used in the authorization header of the request. The respective replacement tokens are explained here. A device registered via DDI-API is in state REGISTERED, indicated by the icon in the “Deployment” view of the Management UI. $ curl 'https://<HOST>/<TENANT_ID>/controller/v1/device03' -i -H 'Accept: application/hal+json' -H 'Authorization: GatewayToken <GATEWAY_TOKEN>' { "config": { "polling": { "sleep": "00:05:00" } }, "_links": { "configData": { "href": "https://<HOST>/<TENANT_ID>/controller/v1/device03/configData" } } } Finally, let’s add some attributes to the device by following the link to configData provided in response to our last call: $ curl 'https://<HOST>/<TENANT_ID>/controller/v1/device03/configData' -i -X PUT -H 'Authorization: GatewayToken <GATEWAY_TOKEN>' -H 'Content-Type: application/json;charset=UTF-8' -d '{ "mode" : "merge", "data" : { "VIN" : "JH4TB2H26CC000001", "hwRevision" : "1" }, "status" : { "result" : { "finished" : "success" }, "execution" : "closed", "details" : [ ] } }' You can verify, that the attributes were set correctly, by checking the Attributes tab of the target details table: Getting the Service [Top ] Creating an Update
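The DDI calls shown above can of course be issued from any HTTP client, not just curl. Below is a minimal Python requests sketch of the poll plus configData update; the host, tenant and token placeholders are the same as in the curl examples.
import requests

BASE = "https://<HOST>/<TENANT_ID>/controller/v1/device01"      # placeholders as above
HEADERS = {"Authorization": "TargetToken <TARGET_TOKEN>"}

# Poll the device resource (this registers the pre-provisioned device).
root = requests.get(BASE, headers={**HEADERS, "Accept": "application/hal+json"}).json()

# Follow the configData link and upload the controller attributes.
config_url = root["_links"]["configData"]["href"]
payload = {
    "mode": "merge",
    "data": {"VIN": "JH4TB2H26CC000000", "hwRevision": "2"},
    "status": {"result": {"finished": "success"}, "execution": "closed", "details": []},
}
requests.put(config_url, headers=HEADERS, json=payload).raise_for_status()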
https://docs.bosch-iot-suite.com/rollouts/introduction/gettingstarted/provisioningadevice.html
2021-04-11T01:54:10
CC-MAIN-2021-17
1618038060603.10
[]
docs.bosch-iot-suite.com
Clusters¶ When you open any cluster from the /dashboard menu you will arrive at /clusters management. Here you will see more information about the selected cluster and will get access to the cluster management menu. - Quickly navigate through active clusters. - Information and log of the selected cluster. - Management menu.
https://docs.cast.ai/console-overview/clusters/
2021-04-11T00:14:06
CC-MAIN-2021-17
1618038060603.10
[array(['../images/clusters.png', None], dtype=object)]
docs.cast.ai
Filter websites based on the selected Magento 2 instance during item sync When you sync items to NetSuite, the following flows now filter the websites based on the selected Magento 2 instance: - NetSuite Item to Magento Product Add/Update - NetSuite Kit Item to Magento Bundle Product Add/Update - NetSuite Matrix Item to Magento Configurable Product Add/Update Sync NetSuite price levels to product price without any error When you run the NetSuite Item to Magento Product Add/Update flow, the NetSuite price level for syncing product price setting (Settings > Product) now syncs the NetSuite price levels to Magento 2 product price without any errors.
https://docs.celigo.com/hc/en-us/articles/360057656311-Magento-2-NetSuite-release-notes-v1-29-4-February-2021
2021-04-11T00:22:38
CC-MAIN-2021-17
1618038060603.10
[]
docs.celigo.com
You're reading the documentation for an older, but still supported, version of ROS 2. For information on the latest version, please have a look at Foxy. Quality guide: ensuring code quality¶ Table of Contents This page gives guidance about how to improve the software quality of ROS 2 packages, focusing on more specific areas than the Quality Practices section of the Developer Guide. The sections below intend to address ROS 2 core, application and ecosystem packages and the core client libraries, C++ and Python. The solutions presented are motivated by design and implementation considerations to improve quality attributes like “Reliability”, “Security”, “Maintainability”, “Determinism”, etc. which relate to non-functional requirements. You do not use Libc/libstdc++ static linking (in case of ThreadSanitizer). You do not build non-position-independent executables (in case of ThreadSanitizer). Implementation: Compile and link the production code with clang using the option -fsanitize=thread (this instruments the production code). In case different production code shall be executed during analysis consider conditional compilation e.g. ThreadSanitizers __has_feature(thread_sanitizer). In case some code shall not be instrumented consider ThreadSanitizers __attribute__((no_sanitize(“thread”))). In case some files shall not be instrumented consider file or function-level exclusion ThreadSanitizers blacklisting, more specific: ThreadSanitizers Sanitizer Special Case List or with ThreadSanitizers no_sanitize(“thread”) and use the option --fsanitize-blacklist. Resulting context: Higher chance to find data races and deadlocks in production code before deploying it. Analysis result may lack reliability, tool in beta phase stage (in case of ThreadSanitizer). Overhead due to production code instrumentation (maintenance of separate branches for instrumented/not instrumented production code, etc.). Instrumented code needs more memory per thread (in case of ThreadSanitizer). Instrumented code maps a lot of virtual address space (in case of ThreadSanitizer).
https://docs.ros.org/en/dashing/Contributing/Quality-Guide.html
2021-04-11T00:39:16
CC-MAIN-2021-17
1618038060603.10
[]
docs.ros.org
class MCP4921¶ class MCP4921(spidrv, cs, clk = 400000) Creates an instance of the MCP4921 class. Arguments: - spidrv – SPI Bus used ‘(SPI0, …)’ - cs – Chip select pin - clk – Clock speed, default 400 kHz MCP4921.set_value¶ set_value(v, gain = 1, buff = False) Sends the 12-bit value v to the DAC. The gain of the output amplifier can be set through the gain parameter; valid values are 1 and 2. Analog Output Voltage = (v / 4096) * Vref * gain If buff is True, the device buffers the Voltage Reference input, increasing the input impedance but limiting the input range and frequency response. If buff is False the input range is wider (from 0V to Vdd) and the typical input impedance is 165 kOhm with 7pF. If in Shutdown mode, the device is changed to Active mode. In this case the output settling time increases to 10 us. MCP4921.shutdown¶ shutdown() Shuts down the device. During Shutdown mode, most of the internal circuits are turned off for power savings and there will be no analog output.
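A minimal usage sketch follows. It is not part of the original reference: the import path, SPI bus and chip-select pin names are assumptions that depend on your Zerynth project and board.
from microchip.mcp4921 import mcp4921      # assumed module path

dac = mcp4921.MCP4921(SPI0, D10)           # SPI bus and chip-select pin are board-specific
dac.set_value(2048)                        # mid-scale: Vout = (2048/4096) * Vref with gain 1
dac.set_value(1024, gain=2)                # Vout = (1024/4096) * Vref * 2
dac.shutdown()                             # power down the output until the next set_value call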
https://docs.zerynth.com/latest/reference/libs/microchip/mcp4921/docs/mcp4921/
2021-04-11T01:53:27
CC-MAIN-2021-17
1618038060603.10
[]
docs.zerynth.com
Docker Desktop WSL 2 backend Estimated reading time: 7 minutes.. Prerequisites Before you install the Docker Desktop WSL 2 backend, you must complete the following steps: - Install Windows 10, version 1903 or higher. - Enable WSL 2 feature on Windows. For detailed instructions, refer to the Microsoft documentation. - Download and install the Linux kernel update package. Best practices To get the best out of the file system performance when bind-mounting files, we recommend storing source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux file system, rather than the Windows file system. You can also refer to the recommendation from Microsoft. - Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed. - Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users(where /mnt/cis mounted from Windows). - Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image>where ~is expanded by the Linux shell to - If you have concerns about the size of the docker-desktop-data VHDX, or need to change it, take a look at the WSL tooling built into Windows. - If you have concerns about CPU or memory usage, you can configure limits on the memory, CPU, Swap size allocated to the WSL 2 utility VM. - To avoid any potential conflicts with using WSL 2 on Docker Desktop, you must uninstall any previous versions of Docker Engine and CLI installed directly through Linux distributions before installing Docker Desktop. Download Download Docker Desktop Stable 2.3.0.2 or a later release. Install Ensure you have completed the steps described in the Prerequisites section before installing the Docker Desktop Stable 2.3.0.2 release. - Follow the usual installation instructions to install Docker Desktop. If you are running a supported system, Docker Desktop prompts you to enable WSL 2 during installation. Read the information displayed on the screen and enable WSL 2 to continue. - Start Docker Desktop from the Windows Start menu. From the Docker menu, select Settings > General. Select the Use WSL 2 based engine check box. If you have installed Docker Desktop on a system that supports WSL 2, this option will be enabled by default. - Click Apply & Restart. Ensure the distribution runs in WSL 2 mode. WSL can run distributions in both v1 or v2 mode. To check the WSL mode, run: wsl.exe -l -v To upgrade your existing Linux distro to v2, run: wsl.exe --set-version (distro name) 2 To set v2 as the default version for future installations, run: wsl.exe --set-default-version 2 When Docker Desktop restarts, go to Settings > Resources > WSL Integration. The Docker-WSL integration will be enabled on your default WSL distribution. To change your default WSL distro, run wsl --set-default <distro name>. For example, to set Ubuntu as your default WSL distro, run wsl --set-default ubuntu. Optionally, select any additional distributions you would like to enable the Docker-WSL integration on. Note The Docker-WSL integration components running in your distro depend on glibc. This can cause issues when running musl-based distros such as Alpine Linux. 
Alpine users can use the alpine-pkg-glibc package to deploy glibc alongside musl to run the integration. - Click Apply & Restart. Develop with Docker and WSL 2 The following section describes how to start developing your applications using Docker and WSL 2. We recommend that you have your code in your default Linux distribution for the best development experience using Docker and WSL 2. After you have enabled WSL 2 on Docker Desktop, you can start working with your code inside the Linux distro and ideally with your IDE still in Windows. This workflow can be pretty straightforward if you are using VSCode. - Open VSCode and install the Remote - WSL extension. This extension allows you to work with a remote server in the Linux distro and your IDE client still on Windows. Now, you can start working in VSCode remotely. To do this, open your terminal and type: wsl code . This opens a new VSCode connected remotely to your default Linux distro which you can check in the bottom. GPU support Starting with Docker Desktop 3.1.0, Docker Desktop supports WSL 2 GPU Paravirtualization (GPU-PV) on NVIDIA GPUs. To enable WSL 2 GPU Paravirtualization, you need: - A machine with an NVIDIA GPU - The latest Windows Insider version from the Dev Preview ring - Beta drivers from NVIDIA supporting WSL 2 GPU Paravirtualization - Update WSL 2 Linux kernel to the latest version using wsl --updatefrom an elevated command prompt - Make sure the WSL 2 backend is enabled in Docker Desktop To validate that everything works as expected, run the following command to run a short benchmark on your GPU: ❯ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM GPU Device 0: "GeForce RTX 2060 with Max-Q Design" with compute capability 7.5 > Compute 7.5 CUDA device: [GeForce RTX 2060 with Max-Q Design] 30720 bodies, total time for 10 iterations: 69.280 ms = 136.219 billion interactions per second = 2724.379 single-precision GFLOP/s at 20 flops per interaction Feedback Your feedback is very important to us. Please let us know your feedback by creating an issue in the Docker Desktop for Windows GitHub repository and adding the WSL 2 label.WSL, WSL 2 Tech Preview, Windows Subsystem for Linux, WSL 2 backend Docker
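If you drive the Docker Engine from scripts rather than the CLI, the bind-mount guidance above applies in the same way. The following is a hedged sketch using the Docker SDK for Python; it is not part of the original page, and the image name and paths are placeholders.
import docker

client = docker.from_env()
# Bind-mount a project stored in the Linux filesystem (fast), not under /mnt/c (slow).
client.containers.run(
    "my-image:latest",                                           # placeholder image
    volumes={"/home/me/my-project": {"bind": "/sources", "mode": "rw"}},
    remove=True,
)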
https://docs-stage.docker.com/docker-for-windows/wsl/
2021-04-11T00:29:29
CC-MAIN-2021-17
1618038060603.10
[]
docs-stage.docker.com
If you need an exact copy of an overlay project with all its embedded plugins, you can use the cloning function of the Overlay Editor. Cloning an overlay means that you can create a copy of one overlay project you have created and apply it to another constellation, including all of the original overlay's plugins. It allows you to quickly create one overlay for multiple constellations. Imagine you want to demonstrate your store with its 5 branches: you can easily clone one overlay project to all 5 stores in your virtual tour list. By doing so the overlay can be accessed from all of your stores' virtual tours, and if you need to change the side menu or any info of the stores, you just edit once and the overlay of all tours gets updated. How To Clone An Overlay? 1. From the Constellation manager, locate the tour that already has the overlay you would like to clone and click on the tools button. 2. In the Tools screen, go to Overlay Manager. 3. In the Overlay Manager screen, click clone. 4. A prompt window will open asking you to select a constellation into which you want to clone the overlay, then click clone to confirm the process. 5. The original overlay project with all its embedded plugins is then applied to the target tour. 6. Now you can go back to the constellation page to see the result of the cloning. Locate the tour with the cloned overlay and open it in the Overlay Editor. As shown in the following example, the overlay project from Chocolate Factory has been cloned to Chocolato Trois-Rivières. How to clone an Overlay to a tour in a different account? 1. In the Overlay Manager screen, click on the down arrow of the clone button, then press clone to account. 2. The prompt window will open asking you to enter the target GoThru account and the CID of the target tour. 3. You will get a notification when the cloning process is finished. 4. Once the cloning process is complete, go to the target account to see the result of the cloning.
https://docs.gothru.co/how-to-clone-an-overlay-to-other-tours-using-gothru-overlay-editor/
2021-04-11T00:49:36
CC-MAIN-2021-17
1618038060603.10
[array(['https://docs.gothru.co/content/images/2021/03/image.png', None], dtype=object) ]
docs.gothru.co
You're reading the documentation for a development version. For the latest released version, please have a look at Foxy. Using Python Packages with ROS 2¶ Goal: Explain how to interoperate with other Python packages from the ROS2 ecosystem. Contents Note A cautionary note, if you intended to use pre-packaged binaries (either deb files, or the “fat” binary distributions), the Python interpreter must match what was used to build the original binaries. If you intend to use something like virtualenv or pipenv, make sure to use the system interpreter. If you use something like conda, it is very likely that the interpreter will not match the system interpreter and will be incompatible with ROS2 binaries. Installing via rosdep¶ The fastest way to include third-party python packages is to use their corresponding rosdep keys, if available. rosdep keys can be checked via: These rosdep keys can be added to your package.xml file, which indicates to the build system that your package (and dependent packages) depend on those keys. In a new workspace, you can also quickly install all rosdep keys with: rosdep install -yr ./path/to/your/workspace If there aren’t currently rosdep keys for the package that you are interested in, it is possible to add them by following the rosdep key contribution guide. To learn more about the rosdep tool and how it works, consult the rosdep documentation. Installing via a package manager¶ If you don’t want to make a rosdep key, but the package is available in your system package manager (eg apt), you can install and use the package that way: sudo apt install python3-serial If the package is available on The Python Package Index (PyPi) and you want to install globally on your system: python3 -m pip install -U pyserial If the package is available on PyPi and you want to install locally to your user: python3 -m pip install -U --user pyserial Installing via a virtual environment¶ First, create a Colcon workspace: mkdir -p ~/colcon_venv/src cd ~/colcon_venv/ Then setup your virtual environment: # Make a virtual env and activate it virtualenv -p python3 ./venv source ./venv/bin/activate # Make sure that colcon doesn’t try to build the venv touch ./venv/COLCON_IGNORE Next, install the Python packages that you want in your virtual environment: python3 -m pip install gtsam pyserial… etc Now you can build your workspace and run your python node that depends on packages installed in your virtual environment. # Source Foxy and build source /opt/ros/foxy/setup.bash colcon build Note If you want release your package on Bloom, you should to add the packages you require to rosdep, see the rosdep key contribution guide.
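Once the package is installed through any of the routes above, using it from a ROS 2 Python node is an ordinary import. Below is a minimal sketch; the node name and serial device path are placeholders, not part of the original guide.
import rclpy
from rclpy.node import Node
import serial                      # third-party package installed via rosdep/apt/pip

class SerialEcho(Node):
    def __init__(self):
        super().__init__('serial_echo')
        # Open a serial port using the pyserial package pulled in above.
        self.port = serial.Serial('/dev/ttyUSB0', 115200, timeout=1.0)   # placeholder device
        self.get_logger().info('Opened %s' % self.port.name)

def main():
    rclpy.init()
    rclpy.spin(SerialEcho())
    rclpy.shutdown()

if __name__ == '__main__':
    main()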
https://docs.ros.org/en/rolling/Guides/Using-Python-Packages.html
2021-04-11T01:34:19
CC-MAIN-2021-17
1618038060603.10
[]
docs.ros.org
Cloud credentials In order to access your cloud, Juju will need to know how to authenticate itself. We use the term credentials to describe the tokens or keys or secrets. Juju can import your cloud credentials in one of three ways: - Accepting credentials provided interactively by the user on the command line - Scanning for existing credentials (e.g. environment variables, "rc" files) - Reading a user-provided YAML-formatted file Each of these methods are explained below, but if you are still having difficulty you can get extra help by selecting your cloud from among this list: Amazon AWS | Microsoft Azure | Google GCE | Joyent | MAAS | OpenStack | VMware vSphere | Oracle Compute | Rackspace Note: LXD deployments are a special case. Accessed locally, they do not require credentials. Accessed remotely, they need a certificate credential. See Using LXD as a cloud for further details. Adding credentials via the command line You can add credentials by running the command: juju add-credential <cloud> Juju will then ask for the information it needs. This may vary according to the cloud you are using, but will typically look something like this: Enter credential name: carol Using auth-type "access-key". Enter access-key: ******* Enter secret-key: ******* Credentials added for cloud aws. Once you have supplied all the information, the credentials will be added. At present, you will need to manually set one to be the default, if you have more than one for a cloud: juju set-default-credential <cloud> <credential> Setting a default credential means this will be used by the bootstrap command when creating a controller, without having to specify it with the --credential option in the juju add-model command. Scanning existing credentials Some cloud providers (e.g. AWS, OpenStack) have command line tools which rely on environment variables being used to store credentials. If these are in use on your system already, or you choose to define them ([there is extra info here][env]), Juju can import them. For example, AWS uses the following environment variables (among others): AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY If these are already set in your shell (you can echo $AWS_ACCESS_KEY_ID to test) they can be used by Juju. To scan your system for credentials Juju can use, run the command: juju autoload-credentials This will will ask you whether to store each set of credentials it finds. Note that this is a 'snapshot' of those stored values - Juju will not notice if they change in future. Adding credentials from a YAML file You can also specify a YAML format file for the credentials. 
This file would be similar to, but shorter than this extensive sample, which we will call mycreds.yaml: credentials: aws: default-credential: peter default-region: us-west-2 peter: auth-type: access-key access-key: AKIAIH7SUFMBP455BSQ secret-key: HEg5Y1DuGabiLt72LyCLkKnOw+NZkgszh3qIZbWv paul: auth-type: access-key access-key: KAZHUKJHE33P455BSQB secret-key: WXg6S5Y1DvwuGt72LwzLKnItt+GRwlkn668sXHqq homemaas: peter: auth-type: oauth1 maas-oauth: 5weWAsjhe9lnaLKHERNSlke320ah9naldIHnrelks homestack: default-region: region-a peter: auth-type: userpass password: UberPassK3yz tenant-name: appserver username: peter google: peter: auth-type: jsonfile file: ~/.config/gcloud/application_default_credentials.json azure: peter: auth-type: service-principal-secret application-id: A source file like the above can be added to Juju's list of credentials with the command: juju add-credential aws -f mycreds.yaml This sample includes all of the default cloud options plus a couple of special cloud options, MAAS and an OpenStack cloud called homestack in the sample. See Clouds. Managing credentials There are several management tasks that can be done related to credentials. Listing credentials You can check what credentials are stored by Juju by running the command: juju credentials ...which will return a list of the known credentials. For example: Cloud Credentials aws bob*, carol google wayne The asterisk '*' denotes the default credential, which will be used for the named cloud unless another is specified. For YAML output that includes detailed credential information, including secrets like access keys and passwords: juju credentials --format yaml --show-secrets The YAML output will be similar to our 'mycreds.yaml' sample above. Warning: It is not possible to update the credentials if the initial credential name is unknown. This restriction will be removed in an upcoming release of Juju. Updating remote credentials using a different Juju user If you are unable to ascertain the original Juju username then you will need to use a different one. This implies adding a new credential name, copying over any authentication material into the old credential name, and finally updating the credentials. Below we demonstrate this for the Azure cloud: Add a new temporary credential name (like 'new-credential-name') and gather all credential sets (new and old): juju add-credential azure juju credentials azure --format yaml --show-secrets > azure-creds.yaml Copy the values of application-id and application-password from the new set to the old set. Then replace the local credentials and upload them to the controller: juju add-credential azure -f azure-creds.yaml --replace juju update-credential azure old-credential-name To be clear, the file azure-creds.yaml (used with add-credential) should look similar to: Credentials: azure: new-credential-name: auth-type: service-principal-secret application-id: foo1 application-password: foo2 subscription-id: bar old-credential-name: auth-type: service-principal-secret application-id: foo1 application-password: foo2 subscription-id: bar Removing local credentials If a local credential (i.e. not cached on a controller) is no longer required, it can be removed: juju remove-credential aws bob
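If you generate credentials programmatically (for example in CI), the YAML file can be written and imported in one step. The following Python sketch is not from the original page; the keys shown are dummies and PyYAML is assumed to be available.
import subprocess
import yaml

creds = {
    "credentials": {
        "aws": {
            "default-credential": "ci-bot",
            "ci-bot": {
                "auth-type": "access-key",
                "access-key": "AKIA...",          # dummy values
                "secret-key": "...",
            },
        }
    }
}

# Write the credentials file and hand it to the same command shown above.
with open("mycreds.yaml", "w") as f:
    yaml.safe_dump(creds, f)

subprocess.run(["juju", "add-credential", "aws", "-f", "mycreds.yaml"], check=True)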
https://docs.jujucharms.com/2.1/en/credentials
2018-06-17T21:46:16
CC-MAIN-2018-26
1529267859817.15
[]
docs.jujucharms.com
You use the catalog service to perform tasks related to requesting a machine. The catalog service comprises APIs for the consumer, service providers, and service administrators. It is designed to be used by consumers and providers of the service catalog. For example, a consumer would request a catalog item such as a machine. The service provider would fulfill the request. The catalog service includes Hypermedia as the Engine of Application State (HATEOAS) links. The links function as templates that you can use to complete common tasks supported by the API. For example, you submit a template request for a given context, such as: catalog-service/api/consumer/entitledCatalogItems/dc808d12-3786-4f7c-b5a1-d5f997c8ad66/requests/template. You then use the returned template, either as-is or modified, to create a request that you POST or PUT to the target API, such as: catalog-service/api/consumer/entitledCatalogItems/dc808d12-3786-4f7c-b5a1-d5f997c8ad66/requests.
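As a rough illustration of that template round trip (this is not from the original documentation; the host, token and authentication style are placeholders), the two calls look like this in Python:
import requests

base = "https://<vra-host>/catalog-service/api/consumer/entitledCatalogItems"
item_id = "dc808d12-3786-4f7c-b5a1-d5f997c8ad66"
headers = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# 1. Fetch the request template for the catalog item.
template = requests.get(base + "/" + item_id + "/requests/template", headers=headers).json()

# 2. Optionally modify the template, then submit it to create the request.
response = requests.post(base + "/" + item_id + "/requests", headers=headers, json=template)
response.raise_for_status()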
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.programming.doc/GUID-867F8881-E381-453D-9AC6-19A91A2F5FB8.html
2018-06-17T22:29:59
CC-MAIN-2018-26
1529267859817.15
[]
docs.vmware.com
Database Replication and Clustering Similar to the maintenance procedure, schema changes to an underlying dataserver may need to be performed on dataservers that are not part of an active dataservice. Although many inline schema changes, such as the addition, removal or modification of an existing table definition will be correctly replicated to slaves, other operations, such as creating new indexes, or migrating table data between table definitions, is best performed individually on each dataserver while it has been temporarily taken out of the dataservice. If you are attempting an Online schema change and running in a MSMM environment, then you should follow the steps in Section 3.2. The following method assumes a schema update on the entire dataservice by modifying the schema on the slaves first. The schema shows three datasources being updated in sequence, slaves first, then the master. With any schema change to a database, the database performance should be monitored to ensure that the change is not affecting the overall dataservice performance.
http://docs.continuent.com/tungsten-clustering-6.0/operations-schemachanges.html
2018-06-17T21:51:01
CC-MAIN-2018-26
1529267859817.15
[]
docs.continuent.com
Mule Maven Plugin 3.1.0 Release Notes Mule Maven Plugin 3.1.0 is the first iteration after our major release of Mule Maven Plugin 3. We have manage to fix a small number of issues that were affecting the product and also added several key features that were intentionally left out of Mule Maven Plugin 3.0.0. Hardware and Software Requirements Hardware and Software Requirements Microsoft Windows 8 \+ Apple Mac OS X 10.10 \+ Linux (tested on Ubuntu 15) Java 8 Maven 3.3.3, 3.3.9, 3.5.0 Features and Functionality The first new feature we added is Domain Support. You can now package and deploy Mule Domains in the same way you did with the rest of the artifacts. There is a new artifact type, the mule-domain-bundle. This new artifact is a ZIP file (not a JAR file as all the other artifacts). A mule domain bundle is a project with a pom.xml file which list dependencies to a Mule Domain artifact and at least one Mule Application. When packaged this artifact contains the JAR files of the Domain and all related Mule Applications. It will also run several validation cross dependencies to ensure not only that the Applications and the Domains are properly related but also to ensure no dependency conflict exists. We’ve create the _muleExclude file. This file contains a list of paths and regular expressions that instruct the Mule Maven Plugin to ignore certain files when attaching the source code to the generated artifact. Finally we’ve added a number of extra validations to ensure that we stop any problem with the artifact beforehand, thus avoiding time wasting in deployments that will fail. Fixed Issues MMP-132 - Validate Packaging in Jenkins when "Process plugins during pom parsing" is on. MMP-293 - Address dependency policy violations. MMP-324 - Redirect project build directory does not affect all mojos in lifecyle. MMP-334 - Mule-artifact.json auto gen should not override unknown values. MMP-327 - If projectBuildDirectory belongs to project base folder, a recursive copy is done. MMP-199 - Cloudhub domain availability verifications occasionally fail. Enhancement Request MMP-320 - Implement deployment timeout when deploying through Agent. MMP-322 - Implement deployment timeout to ARM deployment. MMP-321 - Implement deployment timeout to CloudHub deployment. MMP-288 - Create _muleExclude. MMP-249 - Implement Domain Deployment Standalone. MMP-318 - Implement Domain Deployment Agent. MMP-299 - replace usage of json for gson. MMP-165 - Create blacklist. MMP-271 - Update DefaultValuesMuleArtifactJsonGenerator to remove reflection. MMP-273 - Add Mojo property to override output directory. MMP-280 - Create strictCheck systemproperty. MMP-274 - When Maven version is less than v3.3.3, build should fail.. MMP-298 - Ensure all mule instances are killed after deployment integration test. MMP-286 - Verify project groupid when deploying to Exchange. MMP-282 - Move to Plexus 3.1.0. MMP-305 - Mule 4 Examples should have attachMuleSources as true by default. MMP-308 - Deprecate libraries/run script feature from mule-maven-plugin. MMP-283 - Remove Jersey client dependencies with vulnerabilities. MMP-278 - Validate artifact deployment. MMP-312 - Deprecate timeout property. MMP-314 - The MuleVersion element should validate the minimum runtime version for the application being deployed in all deployment targets. MMP-238 - Resolve mule plugins compatibility with semver4j. MMP-289 - Some tests fail in Windows due to test set up errors. MMP-275 - Project dependencies is not really necessary. 
MMP-90 - Change user-agent of jersey client. MMP-309 - Validate plugin compatibility between apps and domains. MMP-330 - Update default values on CH deployer since new changes on last release. MMP-310 - Remove httpClient exclusion.
https://docs.mulesoft.com/release-notes/mule-maven-plugin-3.1.0-release-notes
2018-06-17T22:17:54
CC-MAIN-2018-26
1529267859817.15
[]
docs.mulesoft.com
Add-ons Kumu offers a number of free and paid add-ons that give your project access to powerful features beyond Kumu's base functionality. All projects (public and private) have access to the same set of add-ons. Add-ons are activated on a project-by-project basis. Click on any of the add-ons below to access instructions on how to install and modify the settings. Free add-ons - Classic SNA Metrics - Community Detection - Google Sheets (public) - Disqus Paid add-ons - Google Sheets (private)
https://docs.kumu.io/guides/add-ons.html
2018-06-17T22:09:05
CC-MAIN-2018-26
1529267859817.15
[]
docs.kumu.io
Working with images In Kumu, there are a number of different places where you can use images to enrich your project. This guide covers: Add images to elements If you are building a map by hand, you can also easily upload an image file from your computer directly to Kumu. Just click an item (element, connection, or loop) on your map, click the camera icon in the upper right of the profile, and click "select a file" to upload your image. You can upload images to descriptions and the map overview as well. To do this, click to edit any text area in the side panel, and look for "select a file" below the text area. Note: when you're uploading images, only JPEG, PNG, and GIF files smaller than 5mb are supported at this time. Instead of uploading a file, you can also add a URL to an image hosted publicly on the web. This is particularly useful when you're importing a spreadsheet into Kumu—just make sure to add an "Image" column in the the "Elements" sheet, then in the "Image" column, add the public URL for the image that you want to add to each element. Troubleshooting image URLs Are you using image URLs and not seeing images on the map? Here are a few steps you can take to troubleshoot the problem: - Make sure your link leads directly to the image, rather than a webpage with the image on it - Make sure your image URL is using a secure connection—that is, the link starts with httpsinstead of just http - Disable image proxy: click the menu icon in the upper left of the map editor, then click Admin and click disable it. Using decorations to add images With decorations, you can create rules that add the same image to multiple elements. To do this, open the element decoration builder, select which elements the rule will apply to, and check the box next to "Add image". You'll be prompted to add an image URL or upload an image from your computer, and your image will be added to the selected elements. Add a background image To add a background image to a map, you can use a snippet of code in the Advanced Editor. The following instructions will help you add an image of a world map, but they can be adapted to add any background image. - Create an element and change its label to background. - Copy/paste the following code into your Advanced Editor: #background { image-url: url(); layer: background; shape: square; size: 5000; color: transparent; image-size: contain; image-resolution: original; label-visibility: none; } - Click SAVE at the bottom of the Advanced Editor to save your changes. You can replace the image-url in that code with a link to any image online. To get an image URL from any image you see online, you can right-click the image and select "Copy Image Address". When you're pasting your new image url into the Advanced Editor, make sure to put it inside the url( ) parentheses. Some images won't be displayed in Kumu, because they are using an insecure connection (the link starts with http instead of https), or because they are traveling through a proxy server. If your image isn't displaying in Kumu, you can save the file to your computer and follow the steps below to upload the image directly to your Kumu project. You can also use the Basic Editor to upload an image from your computer to your Kumu project—this will override the image-url in your code. Follow these steps: - Complete the initial three steps above to create your background element, add your Advanced Editor code, and save your changes. - Click the Settings icon to open the Basic Editor. 
- Click More Options - Select Decorate elements - In the element decoration builder, set the dropdown at the top to "Decorate custom selection" - Set the second row of dropdown menus to Label is background(assuming your background element's label is background) - Check the box next to "Add image", then click "upload image" - Upload your image, and click Done at the bottom of the decoration builder - Click SAVE at the bottom of the Basic Editor If you need to adjust the position of the image element in the map, you'll need to remove the layer: background; line. Click and drag the element to adjust its position, and then add back that line of code when you're done. Notes: - "background" is the label of the element that will contain the background image in this example, but the label can be anything you want. Just make sure you update the #backgroundselector in your code to match your new label. image-resolutioncan have values of auto, original, or any number. The number you include (e.g. 1000) will adapt the resolution for an image of that width (1000px). As always, if you have any questions on how this works, email us at [email protected] for help!
https://docs.kumu.io/guides/images.html
2018-06-17T21:58:54
CC-MAIN-2018-26
1529267859817.15
[array(['../images/upload-image.gif', 'Gif showing how to upload an image to Kumu'], dtype=object)]
docs.kumu.io
Struct serde_json:: Error[−][src] pub struct Error { /* fields omitted */ } This type represents all possible errors that can occur when serializing or deserializing JSON data. Methods One-based line number at which the error was detected. Characters in the first line of the input (before the first newline character) are in line 1. One-based column number at which the error was detected. The first character in the input and any characters immediately following a newline character are in column 1. Note that errors may occur in column 0, for example if a read from an IO stream fails immediately following a previously read newline character. Categorizes the cause of this error. Category::Io- failure to read or write bytes on an IO stream Category::Syntax- input that is not syntactically valid JSON Category::Data- input data that is semantically incorrect Category::Eof- unexpected end of the input data Returns true if this error was caused by a failure to read or write bytes on an IO stream. Returns true if this error was caused by input that was not syntactically valid JSON. Returns true if this error was caused by input data that was semantically incorrect. For example, JSON containing a number is semantically incorrect when the type being deserialized into holds a String. Returns true if this error was caused by prematurely reaching the end of the input data. Callers that process streaming input may be interested in retrying the deserialization once more data is available. Trait Implementations Convert a serde_json::Error into an io::Error. JSON syntax and data errors are turned into InvalidData IO errors. EOF errors are turned into UnexpectedEof IO errors. use std::io; enum MyError { Io(io::Error), Json(serde_json::Error), } impl From<serde_json::Error> for MyError { fn from(err: serde_json::Error) -> MyError { use serde_json::error::Category; match err.classify() { Category::Io => { MyError::Io(err.into()) } Category::Syntax | Category::Data | Category::Eof => { MyError::Json(err) } } } } Raised when a Deserialize receives a value of the right type but that is wrong for some other reason. Read more Raised when deserializing a sequence or map and the input data contains too many or too few elements. Read more Raised when a Deserialize struct type expected to receive a required field with a particular name but that field was not present in the input. Read more
https://docs.rs/serde_json/1.0.20/serde_json/struct.Error.html
2018-06-17T21:58:04
CC-MAIN-2018-26
1529267859817.15
[]
docs.rs
References¶ Nanos gigantum humeris insidentes. Books and papers¶ Several books and articles are mentioned across the documentation and the source code itself. Here is the complete list in no particular order: - Vallado, David A., and Wayne D. McClain. Fundamentals of astrodynamics and applications. Vol. 12. Springer Science & Business Media, 2001. - Curtis, Howard. Orbital mechanics for engineering students. Butterworth-Heinemann, 2013. - Bate, Roger R., Donald D. Mueller, William W. Saylor, and Jerry E. White. Fundamentals of astrodynamics: (dover books on physics). Dover publications, 2013. - Battin, Richard H. An introduction to the mathematics and methods of astrodynamics. Aiaa, 1999. - Edelbaum, Theodore N. “Propulsion requirements for controllable satellites.” ARS Journal 31, no. 8 (1961): 1079-1089. - Walker, M. J. H., B. Ireland, and Joyce Owens. “A set modified equinoctial orbit elements.” Celestial Mechanics 36.4 (1985): 409-419. Software¶ poliastro wouldn’t be possible without the tremendous, often unpaid and unrecognised effort of thousands of volunteers who devote a significant part of their lives to provide the best software money can buy, for free. This is a list of direct poliastro dependencies with a citeable resource, which doesn’t account for the fact that I have used and enjoyed free (as in freedom) operative systems, compilers, text editors, IDEs and browsers for my whole academic life. - Van Der Walt, Stefan, S. Chris Colbert, and Gael Varoquaux. “The NumPy array: a structure for efficient numerical computation.” Computing in Science & Engineering 13, no. 2 (2011): 22-30. DOI:10.1109/MCSE.2011.37 - Jones, Eric, Travis Oliphant, and Pearu Peterson. “SciPy: Open Source Scientific Tools for Python”, 2001-, [Online; accessed 2015-12-12]. - Hunter✝, John D. “Matplotlib: A 2D graphics environment.” Computing in science and engineering 9, no. 3 (2007): 90-95. DOI:10.1109/MCSE.2007.55 - Pérez, Fernando, and Brian E. Granger. “IPython: a system for interactive scientific computing.” Computing in Science & Engineering 9, no. 3 (2007): 21-29. DOI:10.1109/MCSE.2007.53 - Robitaille, Thomas P., Erik J. Tollerud, Perry Greenfield, Michael Droettboom, Erik Bray, Tom Aldcroft, Matt Davis et al. “Astropy: A community Python package for astronomy.” Astronomy & Astrophysics 558 (2013): A33. DOI:10.1051/0004-6361/201322068
https://poliastro.readthedocs.io/en/latest/references.html
2018-06-17T22:05:18
CC-MAIN-2018-26
1529267859817.15
[]
poliastro.readthedocs.io
Once you have placed a server in a pool, it is shown as a pool member in the Resources pane, for example: When you add a server to a pool, XenCenter will attempt to resolve any pool configuration issues if possible. You can change the license of any pool member after joining the pool. The server with the lowest license determines the features available to all members in the pool. For more information about licensing, see About XenServer Licensing. Note that there may be other hardware or configuration issues that will prevent a server from successfully joining a pool: see Pool requirements for details of resource pool prerequisites. See To change the management interface for information on how to do this.
https://docs.citrix.com/en-us/xencenter/6-5/xs-xc-pools/xs-xc-pools-add-host.html
2018-06-17T22:00:05
CC-MAIN-2018-26
1529267859817.15
[]
docs.citrix.com
Combo Chart in Power BI. Prerequisites Combo charts are available in Power BI service and Power BI Desktop. This tutorial uses Power BI service to create a Combo chart. To follow along, open Power BI service and connect to the "Retail Analysis" sample instructions below). Create a basic, single-axis, Combo Chart Watch Will create a combo chart using the Sales and Marketing sample. To create your own combo chart, sign in to Power BI service and select Get Data > Samples > Retail Analysis Sample > Connect >Go to dashboard. - From the "Retail Analysis Sample" dashboard, select the Total Stores tile to open the "Retail Analysis Sample" report. - Select Edit Report to open the report in Editing View. - Add a new report page. the ellipses (...) in the upper-right corner of the visualization, and select Sort by FiscalMonth. You may have to select it twice to sort ascending or descending. Convert the column chart to a combo chart. Month.. Add titles to the axes - Select the paint roller icon to open the Formatting pane. - Select the down arrow to expand the Y-axis options. For Y-Axis (Column), set Position to Left, set Title to On, Style to Show title only, and Display as Millions. Under Y-Axis (Column), scroll down and ensure that Show Secondary is set to On. This displays options for formatting the line chart portion of the combo chart. For Y-Axis (Line), leave Position as Right, turn Title to On, and set Style. Next steps Overview of visualizations in Power BI reports Visualization types in Power BI Power BI - Basic Concepts More questions? Try the Power BI Community
https://docs.microsoft.com/en-us/power-bi/power-bi-visualization-combo-chart
2018-06-17T21:50:13
CC-MAIN-2018-26
1529267859817.15
[]
docs.microsoft.com
Replace a Service Catalog form script with a widget
You can embed a widget in a catalog item to replace a UI macro or other reusable form component that is not supported in Service Portal.
Before you begin
Role required: admin or sp_admin
Procedure
Create a widget that performs the action you would like to use in catalog item forms. See step 7 for a simple example widget that accesses another variable on the form.
Open a catalog item that previously used a UI Macro or other reusable component not supported in Service Portal.
In related lists, add a new variable to the catalog item.
Configure the variable form to add the Widget field.
In the Type field, select Macro.
In the Widget field, select a widget that performs the desired action.
(Optional) Use the $scope.page.g_form() or $scope.page.field syntax in the embedded widget to access the catalog item values. This example shows how to modify the value of a single-line text variable with the name color associated with the catalog item.
Widget HTML Template
<div>
  Data from catalog variable:
  <h1>{{ c.data.message }}</h1>
</div>
Widget client script
function($scope) {
  var c = this;
  // Watch for changes in the color variable
  $scope.$watch(function () {
    return $scope.page.g_form.getValue('color');
  }, function (value) {
    // Update the local data object with data from the variable
    c.data.message = value ? 'Content of color variable: ' + value : '';
  });
}
You can use the following to access variable or catalog item fields:
$scope.page.g_form(): The g_form instance on the form. You can use all supported g_form methods described in Supported and unsupported client scripts. For example, g_form.setValue('variable_name', 'new value');.
$scope.page.field(): The object that represents the variable.
When you open the catalog item in the Service Portal, the embedded widget accesses the variable fields associated with the catalog item.
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/build/service-portal/task/ui-macro-widget.html
2018-06-17T21:51:58
CC-MAIN-2018-26
1529267859817.15
[]
docs.servicenow.com
Caching
Buckaroo will cache files downloaded from the network and clones of Git repositories. This prevents duplicate work, and can even enable Buckaroo to work offline in some cases.
Clearing the Cache
If you would like to reclaim some disk space, it is safe to delete the Buckaroo cache folder. The location of the cache folder varies by platform.
https://buckaroo.readthedocs.io/en/latest/caching.html
2018-06-17T21:47:38
CC-MAIN-2018-26
1529267859817.15
[]
buckaroo.readthedocs.io
You configure the expression with the OR operator to check for the preceding three applications. If NetScaler Gateway detects the correct version of any of the applications on the user device, users are allowed to log on. The expression in the policy dialog box appears as follows: av_5_Symantec_10 || av_5_McAfeevirusscan_11 || av_5_sophos_4 For more information about compound expressions, see Configuring Compound Expressions.
https://docs.citrix.com/de-de/netscaler-gateway/11/vpn-user-config/endpoint-policies/ng-endpoint-preauthentication-config-tsk/ng-endpoint-expressions-multiple-tsk.html
2018-06-17T21:56:50
CC-MAIN-2018-26
1529267859817.15
[]
docs.citrix.com
Controls Controls allow you to customize how people interact with your maps. You can use them to overlay buttons, images, text and more! Interactive controls can be used to transform the current view's setting too, such as filter, focus, and clustering. Add controls through the Basic Editor You can use the Basic Editor to add a few simple types of controls to your map. Click the Settings icon to open the editor, then click MORE OPTIONS and select Add custom control. Kumu will open up the controls builder, with a few options pre-selected: Use the dropdown menus in the controls builder to set up your control, then, when you're done, click the back arrow to return to the main screen, and click SAVE. Add controls through the Advanced Editor To unlock the full set of flexible controls features, you can use the Advanced Editor. Here's an example of what controls look like in the Advanced Editor: @controls { top { showcase { by: "Element type"; } } } In general, controls are defined with the @controls block, grouped into regions, and customized using properties. You can add multiple controls to a region and even override the default controls built into Kumu. Here's the general syntax that shows how multiple regions can be used, and how multiple controls can be added to the same region: @controls { region { control { property: value; property: value; .... } another-control { property: value; property: value; ... } } another-region { some-other-control { ... } } } Regions Adding a custom control to your map starts by picking where you want to place the control. Controls can be assigned to one of six regions on the map: - top - top-left - top-right - bottom - bottom-left - bottom-right @controls { top-left { title { value: "This map has a title!"; } } } Properties Controls are customized using properties, and each control has its own unique set of properties that it accepts. In the example below, by is a property of the showcase control that accepts a field name (wrapped in quotes). @controls { bottom { showcase { by: "Element type"; } } } You'll want to read the individual guides (further below) to learn which options are available for each control. Children Sometimes controls need to work with complex lists of options. Since these would be overwhelming to define in a single line, the items are included as children of the control instead, and follow a similar syntax to the controls themselves. In the example below, we call the option blocks the "children" of the showcase control, and each child includes its own set of properties. @controls { top-left { showcase { option { label: "People"; selector: person; } option { label: "Orgs"; selector: organization; } } } } Available controls - Title control - Text control - Label control - Showcase control - Filter control - Cluster control - Tagged-timeline control - Color-legend control - Image control Advanced Overriding built-in controls All of Kumu's built-in controls (search, zoom buttons, settings buttons, and legend) are now handled by the same platform that custom controls are built on. That means you can move the built-in controls around, omit ones you don't need, or even reset the built-in controls and start from scratch. 
By default, @controls looks like this: @controls { top-left { search {} } top-right { zoom-toolbar {} settings-toolbar {} focus-toolbar {} } bottom-left { legend {} } } If you wanted to keep search but drop the others you could use: @controls { top-left { search {} } top-right {} bottom-left {} } Or, if you wanted to start from scratch without any of Kumu's built-in controls you could use something like: @controls { reset: true; top { title { value: "Check out my custom controls"; } } bottom { showcase { by: "Element type"; } } }
https://docs.kumu.io/guides/controls.html
2018-06-17T22:13:06
CC-MAIN-2018-26
1529267859817.15
[array(['../images/custom-controls-intro.png', 'Image of a custom control'], dtype=object) array(['../images/control-builder.png', 'controls builder ui'], dtype=object) ]
docs.kumu.io
absenteeism Absenteeism, see School -- Attendance -- Compulsory abuse Abuse, see Children -- Abuse and neglect ; Sex crimes academic fee Academic fee, see University of Wisconsin -- Tuition accident Accident, see Motor vehicle -- Accident accountant Accountant, see Certified public accountant accounting examining board Accounting Examining Board, see Certified public accountant acquired immunodeficiency syndrome Acquired immunodeficiency syndrome Mike Johnson Life Care and Early Intervention Services Grants: funding increased [Sec. 1785r] - Act 59 adams county Adams County High capacity well regulations revised re activities exempt from approval; DNR duties re designated study area - Act 10 adjutant general Adjutant General administration, department of _ agency and general functions Administration, Department of -- Agency and general functions Administrative rules and rule-making procedures: various changes - Act 57 Building Commission authority re thresholds for building trust fund, certain aspects of projects enumerated in the state building program, and approval of contracts for construction of or addition to certain building projects, State Fair Park Board provision; single prime contracting exception and project threshold, DOA duties; selection of project architects and engineers - Act 237 Enterprise resource planning system (STAR program) management: DOA reports of JCF and JCIPT required [Sec. 169t] [vetoed] - Act 59 Information technology and communication services self-funded portal: report required [Sec. 172] [vetoed] - Act 59 Pay for success contracting: DOA may use for eligible services to individuals, conditions and JCF approval required; DHS, DOC, DCF, and DWD to study if pay for success could be used for programs they administer; reports required - Act 267 Public housing adult residents: housing authorities to conduct employment screenings, create employability plans, and require able-bodied and unemployed or underemployed to complete a questionnaire re abuse of controlled substances - Act 265 School district employee health care: reports required [Sec. 74m, 1623g, r] - Act 59 Security services provided by DOA to state agencies at multitenant state-run facilities [Sec. 161, 449] - Act 59 Service award program for volunteer emergency response personnel: revisions re funding and vetting rules [Sec. 112-113d, 438, 9301 (6s)] - Act 59 Special prosecutor to assist DA: court may appoint in certain counties, DOA certification of backlog provision [Sec. 2261qm, rm, rt] - Act 59 State or political subdivision general sales prohibition: exceptions created; online auction provision - Act 65 Technology for educational achievement in Wisconsin: annual conference requirement eliminated [Sec. 174, 439] - Act 59 Telecommunications relay service administration transferred from DOA to PSC [Sec. 418, 1701, 1702, 9101 (4)] - Act 59 WC hearings: transferring FTE positions from DWD to DOA [Sec. 9151 (3)] - Act 59 administration, department of _ boards and other subdivisions Administration, Department of -- Boards and other subdivisions College Savings Program Board transferred from DOA to DFI; College Tuition and Expenses Program provision [Sec. 34, 114-117, 148-153, 193-195, 427-434, 528, 544-547, 1015, 1016, 1017, 1019-1023, 1704-1706, 2233, 2234, 9101 (2)] - Act 59 Elimination of Depository Selection Board [Sec. 31, 136, 499-502, 529, 586-591, 1703, 1931, 2212, 9101 (1)] - functions and FTE positions transferred from OCI to Division of Enterprise Technology in DOA [Sec. 
9124 (1), 9424 (1)] - Act 59 Information technology study on services provided by Division of Enterprise Technology to OCI [Sec. 9101 (11c)] [vetoed] - Act 59 Interagency Council on Homelessness created; appropriation and report provisions - Act 74 Judicial compensation [Sec. 9101 (8f)] - Act 59 Longevity awards for DOC and DHS correctional officers and youth counselors: DPM to include in the 2017-19 state employee compensation plan [Sec. 1761p, 9101 (11w)] [vetoed] - Relocation assistance and operations within DOA moved to DOA legal services [Sec. 416, 422, 585m] - Act 59 administration, department of _ budget and fiscal issues Administration, Department of -- Budget and fiscal issues Automated justice information system appropriation repealed [Sec. 171, 421] - Act 59 Base budget review report: state agencies, the legislature, and the courts required to submit every even-numbered year; requirements specified - Act 212 Condemnation litigation expenses: threshold increased and adjusted annually based on the consumer price index [Sec. 585n-qm, 9310 (6w)] - Act 59 Connecting homeless individuals with permanent employment: DOA grants to municipalities and counties, preference provision [Sec. 133, 452] - Act 59 Cost-benefit analysis of lease and purchase options re executive branch agencies: DOA required to conduct and submit to Legislature, JCF provision [Sec. 161d, e, 9301 (2f)] [vetoed] - Act 59 Electric energy derived from renewable resources: appropriation created for DOA, DOC, DHS, DPI, DVA, and the UW System [Sec. 205, 206, 221, 222, 365, 366, 375, 376, 413, 414, 447, 448] - Fee report required with each agency budget request submitted to DOA [Sec. 139m] [vetoed] - Act 59 Goals for the UW System: Board of Regents to identify metrics to measure progress, outcomes-based funding formula and annual report to JCF required, innovation fund to increase enrollment in high-demand degree programs [Sec. 603m] [partial veto] - Act 59 Housing cost grants and loans: reasonable balance among geographic areas requirement repealed [Sec. 118] - Act 59 Information technology and procurement services positions: DOA report to JCF required [Sec. 9101 (11q)] [vetoed] - Act 59 Information technology permanent positions converted from contractor staff: DOA report to JCF required [Sec. 9101 (11s)] [vetoed] - Act 59 Interest on refunds issued for jobs tax credit, enterprise zone jobs credit, and business development credit: DOA and DOR prohibited from paying [Sec. 1036, 1084, 1109, 9338 (11)] - Act 59 Land information fund: appropriation for the land information program [Sec. 435, 436] - Act 59 Mental health services for homeless individuals: grant program transferred from DOA to DHS [Sec. 130-132, 451, 9101 (3)] - Act 59 National guard and state defense force members on state active duty: continuation of pay for injured members, death gratuity, and reemployment rights; DMA and DOA duties - Act 274 Personal property tax exemption for machinery, tools, and patterns; state aid payments to taxing jurisdictions, DOA and DOR duties [Sec. 480d, 632p, 984pg, q, 997j, 1210p, 1630d, 1635h, 1640d] - Act 59 Required general fund structural balance applies to executive budget bill or bills [Sec. 140k] [vetoed] - Act 59 Required general fund structural balance does not apply to legislation adopted in the 2017-18 legislative session [Sec. 
9152 (1i)] - Act 59 School levy, lottery and gaming, and first dollar property tax credits: allowing a municipal ordinance having DOA distribute amounts directly to the municipality rather than the county [Sec. 1211] - Act 59 State leases of real property: DOA required to conduct a cost-benefit analysis and rent over certain amount subject to JCF approval - Act 132 Transfer 1 FTE position from Tour.Dept to DOA [Sec. 9144 (1)] - Act 59 ``Transitional housing grants" renamed ``housing grants" and 24 month limit repealed; grants to correspond with areas served by continuum of care organizations designated by HUD [Sec. 119-128, 450] - Act 59 VendorNet fund administration repealed [Sec. 424] - Act 59 Volkswagen settlement fund distributions: competitive transit capital assistance grants for replacement of public transit vehicles [Sec. 111, 484, 1210] [111 -- partial veto] - Act 59 Wisconsin Healthcare Stability Plan (WIHSP) created re state-based reinsurance program for health carriers, federal waiver required; OCI authority - Act 138 administration of estates Administration of estates, see Estate of deceased person administrative code Administrative code School bus previously titled or registered in another state or jurisdiction: administrative code repealed re prohibition on purchase of for use in school transportation [Admin.Code Trans 300.13 (intro.)] - Act 49 administrative rules Administrative rules, see also specific agency or department Administrative rule-making procedures: expedited procedure for repealing unauthorized rules; agency review of rules and legislative enactments; LRB biennial report required; retrospective economic impact analyses for existing rules - Act 108 Administrative rules and rule-making procedures: various changes - Act 57 Alternative education grants: DPI requirement and rules eliminated [Admin.Code PI 44] - Act 92 Aquaculture and fish farm regulations revised; emergency rule procedure exemption - Act 21 Background check changes re child care program, caregiver, and nonclient resident of the child care provider's home; changes to training requirement for Wisconsin Shares reimbursement; emergency rules [Sec. 380, 394, 397, 776-784, 786-833, 835a, 836-843, 845, 846, 849-854, 871-873, 876-880, 1626-1628, 2245, 9106 (1), 9406 (2)] - Act 59 Bicycles and motor bicycles operated on state trails during hours of darkness: DNR rules revised re lighting requirements [Admin.Code NR 45.05] - Act 301 Birth defect prevention and surveillance system changes; DHS duties specified; administrative code provisions [Admin.Code DHS 116.04, 116.05] [Sec. 
1791e-u, 2266r-t] - Act 59 Clinical social worker: practice requirement for licensure modified; diagnostic manual provision - Act 356 Commission, board, credentialing board, or examining board (restricted agency) that has not promulgated a rule in 10 years or more may not take any action regarding rules unless authorized by subsequent law - Act 158 Cosmetology, aesthetics, manicuring, and barbering: practicing outside of a licensed establishment permitted under set conditions; separate licensure for managers and instructor certificate eliminated, use of certified instructor title restrictions - Act 82 County and municipal officers: removal at the pleasure of the appointing authority; administrative code provision [Admin.Code DHS 5.06] - Act 150 CPA revisions re educational requirements, continuing education, peer reviews, and interstate data-sharing programs [Admin.Code Accy 2.002, 2.101, 2.202, 2.303] - Act 88 DSPS may appoint certain local governments to approve building and plumbing plans and variances for public buildings and places of employment, administrative code provisions [Admin.Code SPS 302.31, 361.04, 361.60, 361.61] - Act 198 Electrical wiring code for certain housing review by DSPS; legal description for easements re sewer lines or facilities; development-related permits from more than one political subdivision; exemption from highway weight limits for vehicles delivering propane; review or report on bills and proposed administrative rules that affect housing modified; property owner rights regarding assessment; and maintenance and construction activities on certain structures under a county shoreland zoning ordinance - Act 68 Enhanced Nurse Licensure Compact ratified; emergency rules provision - Act 135 Fair employment law revisions re state and local government agencies denying a license based on arrest or conviction record; rule-making authority and emergency rule provisions - Act 278 Financial institution information disclosed to a Federal Home Loan Bank; injunctions against a Federal Home Loan Bank; periodic examinations of financial institutions by Division of Banking and Office of Credit Unions; limit on savings bank loans to one borrower; interest on residential mortgage loan escrow accounts; security of public deposits; capital reduction by state banks; insurance company liquidation proceedings; and exemption from overtime pay requirements for outside salespersons [Admin.Code DWD 274.04] - Act 340 Down Down /2017/related/acts_index/index true actindex /2017/related/acts_index/index/A/administration_department_of___boards_and_other_subdivisions actindex/2017/administration, department of _ boards and other subdivisions actindex/2017/administration, department of _ boards and other subdivisions
https://docs.legis.wisconsin.gov/2017/related/acts_index/index/A/administration_department_of___boards_and_other_subdivisions
2018-06-17T22:15:09
CC-MAIN-2018-26
1529267859817.15
[]
docs.legis.wisconsin.gov
Multiple currency model For price fields using the multiple currency model, you specify a fixed price based on locale. For example, in this model, we specify that the item costs USD 500, but GBP 450, regardless of current exchange rates. Figure 1. Multiple currency price If we then view this as a UK user, instead of seeing a price of about 365 pounds (which is what we'd get if we converted USD 500 into pounds at the time of this writing), we see the price of exactly GBP 450. Figure 2. Catalog item fixed price
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/localization/concept/c_MultipleCurrencyModel.html
2018-06-17T22:02:14
CC-MAIN-2018-26
1529267859817.15
[]
docs.servicenow.com
certbot.error_handler¶ Registers functions to be called if an exception or signal occurs. - class certbot.error_handler. ErrorHandler(func=None, *args, **kwargs)[source]¶ Context manager for running code that must be cleaned up on failure. The context manager allows you to register functions that will be called when an exception (excluding SystemExit) or signal is encountered. Usage: handler = ErrorHandler(cleanup1_func, *cleanup1_args, **cleanup1_kwargs) handler.register(cleanup2_func, *cleanup2_args, **cleanup2_kwargs) with handler: do_something() Or for one cleanup function: with ErrorHandler(func, args, kwargs): do_something() If an exception is raised out of do_something, the cleanup functions will be called in last in first out order. Then the exception is raised. Similarly, if a signal is encountered, the cleanup functions are called followed by the previously received signal handler. Each registered cleanup function is called exactly once. If a registered function raises an exception, it is logged and the next function is called. Signals received while the registered functions are executing are deferred until they finish. register(func, *args, **kwargs)[source]¶ Sets func to be run with the given arguments during cleanup. _signal_handler(signum, unused_frame)[source]¶ Replacement function for handling received signals. Store the received signal. If we are executing the code block in the body of the context manager, stop by raising signal exit. - class certbot.error_handler. ExitHandler(func=None, *args, **kwargs)[source]¶ Bases: certbot.error_handler.ErrorHandler Context manager for running code that must be cleaned up. Subclass of ErrorHandler, with the same usage and parameters. In addition to cleaning up on all signals, also cleans up on regular exit.
http://letsencrypt.readthedocs.io/en/latest/api/error_handler.html
2018-06-17T21:41:42
CC-MAIN-2018-26
1529267859817.15
[]
letsencrypt.readthedocs.io
Geolocation
Many of today's apps are enhanced by the use of geolocation – wireless detection of the physical location of a remote device. These apps determine the user's position and use this data to enhance user experience. For example, apps can capture the exact location where a picture was taken, or determine which of the businesses stored in the database to return to the user based on their current location. API Services provides a standard format for storing geolocation information in any entity, as well as syntax for querying that data based on distance from a latitude/longitude point.
Saving location data in an entity
In API Services, geolocation data is saved in the location property of an entity with latitude and longitude sub-properties in the following format:
"location": {
  "latitude": <latitude_coordinate>,
  "longitude": <longitude_coordinate>
}
An entity's geolocation can be specified when the entity is created or added later by updating an existing entity. For example, the following entity describes a restaurant:
{
  "uuid" : "03ae956a-249f-11e3-9f80-d16344f5a0e1",
  "type" : "restaurant",
  "name" : "Rockadero",
  "location": {
    "latitude": 37.779632,
    "longitude": -122.395131
  },
  "created" : 1379975113142,
  "modified" : 1379975113142,
  "metadata" : {
    "path" : "/restaurants/03ae956a-249f-11e3-9f80-d16344f5a0e1"
  }
}
Querying location data
Location-aware apps require the ability to return content and results based on the user's current location. To easily enable this, API Services supports the following query parameter to retrieve entities within a specified distance of any geocoordinate based on its location property:
location within <distance_in_meters> of <latitude>, <longitude>
The returned results are sorted from nearest to furthest. Entities with the same location are returned in the order they were created. When using geolocation as part of a multi-parameter query, the location parameter must be the first parameter included in the query, e.g.
location within <distance_in_meters> of <latitude>, <longitude> and status='active'
The location parameter can be appended to any standard API Services query. For more information on how to query your API Services data, see Querying your data.
For example, here is how you would find all the devices within 8,046 meters (~10 miles) of the center of San Francisco: curl -X GET within 8046 of 37.774989,-122.419413 -(NSString*)geoQuery { //instantiate the App Delegate class AppDelegate *appDelegate = (AppDelegate *)[ [UIApplication sharedApplication] delegate]; //specify the entity type that you want to query NSString *type = @"device"; //instantiate the ApigeeQuery class ApigeeQuery *qs = [[ApigeeQuery alloc] init]; //add the location query to the query [qs addRequiredWithin:@"location" latitude:37.774989 longitude:-122.419413 distance:16000.00]; //call getEntities to initiate the request ApigeeClientResponse *entities = [appDelegate.dataClient getEntities:type query:qs]; @try { //success } @catch (NSException * e) { //error } } //instantiate ApigeeClient and get an instance of ApigeeDataClient String ORGNAME = "your-org"; String APPNAME = "your-app"; ApigeeClient apigeeClient = new ApigeeClient(ORGNAME,APPNAME, getBaseContext()); ApigeeDataClient dataClient = apigeeClient.getDataClient(); //specify an entity type to retrieve String type = "device"; //entity type to be retrieved //specify our geo query, including distance, latitude and longitude String queryString = "location within 16000 of 37.774989, -122.419413"; //call getEntitiesAsync to initiate the asynchronous API call dataClient.getEntitiesAsync(type, queryString, new ApiResponseCallback() { @Override public void onException(Exception e) { // Error } @Override public void onResponse(ApiResponse response) { try { if (response != null) { // Success } } catch (Exception e) { //The API request returned an error // Fail } } }); var dataClient = new Apigee.Client({ orgName:'your-org', appName:'your-app' }); var options = { type:"item", //Required - the type of collection to be retrieved client:dataClient, qs:{ql:"location within 16000 of 37.774989, -122.419413"} }; //Create a collection object to hold the response var collection = new Apigee.Collection(options); //Call request to initiate the API call collection.fetch(function (error, response) { if (error) { //error } else { //success } }); #Create a client object usergrid_api = '' organization = 'your-org' application = 'your-app' dataClient = Usergrid::Application.new "#{usergrid_api}/#{organization}/#{application}" begin # Retrieve the collection by referencing the [type] # and save the response response = dataClient['devices'].query("location within 16000 of 37.774989, -122.419413").entity rescue #fail end var dataClient = new Usergrid.client({ orgName:'your-org', appName:'your-app' }); var options = { type:"item", //Required - the type of collection to be retrieved client:dataClient, qs:"location within 16000 of 37.774989, -122.419413" }; //Create a collection object to hold the response var collection = new Usergrid.collection(options); //Call request to initiate the API call collection.fetch(function (error, response) { if (error) { //error } else { //success } }); Enrich your app with location data Location-awareness has become a feature users expect in many types of mobile applications because of its ability to create a more personalized and relevant experience for each user. With this in mind, the geolocation feature in API Services was designed to work with many of the available default data entities to allow app developers to easily integrate powerful in-app features that can increase user engagement. Here are just a few of the ways that saving location data to a data entity can improve an app: Help or comments? 
http://docs.apigee.com/app-services/content/geolocation?rate=bUGjBTRujFsXX0IRB-0FDNc6Ue5sYKK-UBKnpLW-X8M
2016-02-06T03:03:31
CC-MAIN-2016-07
1454701145751.1
[]
docs.apigee.com
Difference between revisions of "Version History" From Joomla! Documentation Revision as of 06:02, 23 April 2013 This pages in this category contains more detailed version histories for all Joomla! CMS versions. > Cite error: <ref> tags exist, but no <references/> tag was found Pages in category ‘Version History’ The following 11 pages are in this category, out of 11 total.
https://docs.joomla.org/index.php?title=Category:Version_History&curid=28042&diff=85093&oldid=82438
2016-02-06T03:44:16
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Difference between revisions of "How do you set global preferences for content?" From Joomla! Documentation Revision as of 14:42, 1 September 2012 Global preferences in content are set in the article manager. In the backend, go to Content>Article manager. On the tool bar, second from the right, there is the options (in Joomla 1.5 preferences) icon. Click that and set your global preferences.
https://docs.joomla.org/index.php?title=How_do_you_set_global_preferences_for_content%3F&diff=73566&oldid=34080
2016-02-06T03:04:13
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 15:30, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JDatabaseQueryMySQLi::group/11.1 to API17:JDatabaseQueryMySQLi::group without leaving a redirect (Robot: Moved page) - 19:37, 27 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 56440 of page JDatabaseQueryMySQLi::group/11.1 patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=JDatabaseQueryMySQLi%3A%3Agroup%2F11.1
2016-02-06T03:32:07
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Cafu is available under two licenses. The source code is in both cases exactly the same, and it depends on your goals and requirements which of the two licenses is the most suitable for you. In summary, just let us know that you want a custom license, and what you want the license to look like. Mention any detail that is important to you. We're looking forward to working out the details together with you! All external libraries that are used in Cafu are legally compatible with closed-source use, and none of them imposes GPL-like restrictions of its own. See file LICENSE.txt for details, or ask us if you have any questions!
http://docs.cafu.de/cppdev:duallicensing
2016-02-06T02:50:58
CC-MAIN-2016-07
1454701145751.1
[]
docs.cafu.de
IWithOS Interface Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. The stage of the virtual machine scale set definition allowing you to specify the operating system image. public interface IWithOS type IWithOS = interface Public Interface IWithOS - Derived -
https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.compute.fluent.virtualmachinescaleset.definition.iwithos?view=azure-dotnet
2022-06-25T01:34:55
CC-MAIN-2022-27
1656103033925.2
[]
docs.azure.cn
Before saving data on the server you can validate it and assign a handler function for any response received. In short, validation involves two key points:
To implement server-side validation of incoming data you can use the following events:
The beforeProcessing event fires for all types of operations while the other events fire only for the related operations, i.e. you can set different validation rules for different operations or the same rule for all of them at once. The event receives a DataAction object as its parameter. This object can be used to retrieve the related data and allow or deny the operation (please note, it contains only the data which has been received from the client side, not all the data related to the record). To check the value of a field you should use the method get_value():
function validate($data){
    if ($data->get_value("some_field")=="") ...
}
$conn->event->attach("beforeProcessing","validate");
In case of an error you can choose one of two ways:
1) Use the predefined methods for error processing on the client side. The difference between these methods is only in the way the error is highlighted.
function validate($data){
    if ($data->get_value("some")=="")
        $data->invalid();
}
$conn->event->attach("beforeProcessing","validate");
2) Assign your own processing on the client side through the dataProcessor's method defineAction() (see the details here):
dp.defineAction("invalid",function(sid,response){
    var message = response.getAttribute("message");
    alert(message);
    return true; // return false to cancel default data processing at all
})
You can change the returned status and set your own one by means of the method set_status(). In this case any default processing (no matter whether the defineAction() method returns true or false) is cancelled and you specify data processing fully on the client side.
Server side:
function validate($data){
    if ($data->get_value("some")=="")
        $data->set_status("my_status")
}
$conn->event->attach("beforeProcessing","validate");
Client side:
dp.defineAction("my_status",function(sid,response){
    ...
})
You can send some custom information to the client side through additional methods.
https://docs.dhtmlx.com/connector__php__validation.html
2022-06-25T02:38:10
CC-MAIN-2022-27
1656103033925.2
[]
docs.dhtmlx.com
Template:Backpanel:Klimax DS/2
- MAINS INPUT - To connect to the mains electricity supply.
- EXAKT LINK - To connect to Exakt compatible devices (Exaktbox, Exakt Speakers, Urika II, etc.)
- ETHERNET - To connect to a network (100Base-T)
- RS232 PORTS - For connection to Linn source products without ethernet ports.
- FALLBACK - Used when reprogramming the unit.
EXAKT LINK connection
Cables
Exakt Link uses readily available network cables:
- CAT 5 UTP/FTP
- CAT 6 UTP/FTP
Note: There is no change in the audio performance of the Exakt Link between Cat 5 and Cat 6 cables.
Configurations
Exakt systems can be configured either as a star from the system master or as a daisy chain. Examples of each configuration are shown below.
Note: There is no change in audio performance between the star configuration and the daisy chain configuration.
For more examples and information check HERE.
To enable Exakt options on the Exakt DS, ensure that the internal volume control is turned ON in Konfig.
https://docs.linn.co.uk/wiki/index.php/Template:Backpanel:Klimax_DS/2
2022-06-25T02:13:24
CC-MAIN-2022-27
1656103033925.2
[]
docs.linn.co.uk
Performing an implicit data type conversion requires that an appropriate cast definition exists that specifies the AS ASSIGNMENT clause. SQL Engine performs implicit UDT-to-DATE conversions using such a cast definition. If no UDT-to-DATE implicit cast definition exists, Vantage looks for other cast definitions that can substitute for the UDT-to-DATE implicit cast definition. Substitutions are valid because SQL Engine can use the implicit cast definition to cast the UDT to the substitute data type, and then implicitly cast the substitute data type to a DATE type.
https://docs.teradata.com/r/Teradata-VantageTM-Data-Types-and-Literals/July-2021/Data-Type-Conversions/UDT-to-DATE-Conversion/Usage-Notes/Implicit-Type-Conversion
2022-06-25T02:16:34
CC-MAIN-2022-27
1656103033925.2
[]
docs.teradata.com
This method is only used by Post FX Pipelines and those that extend from them. This method is called every time the postBatch method is called and is passed a reference to the current render target. At the very least a Post FX Pipeline should call this.bindAndDraw(renderTarget); however, you can do as much additional processing as you like in this method if you override it from within your own pipelines.
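As a rough sketch of the override described above, here is what a custom Post FX Pipeline overriding onDraw could look like. The class name ExamplePostFX, the constructor shape, and the omission of a custom fragment shader are assumptions made for illustration only; the one requirement taken from the text is the call to this.bindAndDraw(renderTarget).
class ExamplePostFX extends Phaser.Renderer.WebGL.Pipelines.PostFXPipeline {
    constructor (game) {
        // Assumed minimal config; a real pipeline would usually also pass a fragShader here.
        super({ game });
    }

    onDraw (renderTarget) {
        // Any additional per-draw processing (for example, updating shader uniforms)
        // can happen here before drawing.

        // At the very least, bind the current render target and draw it:
        this.bindAndDraw(renderTarget);
    }
}
A pipeline along these lines would then be registered with the renderer's pipeline manager and attached to a Game Object (for example via setPostPipeline); those steps are outside the scope of the page above.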
https://newdocs.phaser.io/docs/3.55.2/focus/Phaser.Renderer.WebGL.Pipelines.BitmapMaskPipeline-onDraw
2022-06-25T01:37:18
CC-MAIN-2022-27
1656103033925.2
[]
newdocs.phaser.io
Multi-step reactions and graphical charts can also be created by the Reaction tool. When the canvas contains a reaction, only those file formats which support reactions are available in the Export dialog. Marvin JS provides the option to set map numbers on individual atoms. Unlike atom indices, map numbers remain constant while the molecule is edited. Mapping can be useful if you want to identify corresponding atoms on the reactant and product sides of a reaction. The assigned map numbers can be changed or deleted later. Click the Auto Map button. In this case every atom in the reaction gets an atom map number automatically. Please note that the Auto-map function only works for structures which are in the same reaction.
https://docs.chemaxon.com/display/lts-europium/reactions-and-mechanisms-in-marvin-js.md
2022-06-25T01:15:11
CC-MAIN-2022-27
1656103033925.2
[]
docs.chemaxon.com
Collapsers are expandable elements that hide page content until you trigger them open. We use collapsers to hide content in very long documents, out of consideration for our readers. Each collapser has a title (what we show to readers), but also an id that we use for deep "anchor" links to specific collapsers. - Use the keyboard shortcut to show (open) all collapsers on the page. - Use CMD+F (or CTRL+F) to find in page and all the collapsers will open automatically.
https://docs.newrelic.com/kr/docs/style-guide/structure/collapsers/?q=
2022-06-25T01:10:49
CC-MAIN-2022-27
1656103033925.2
[]
docs.newrelic.com
Functionality-related FAQs
Should I use the app to add tracking information to PayPal while I can do it manually?
It's definitely worth trying our app even if you have a small number of orders. Our app's plans range from 100 orders (free of charge) to unlimited orders. Let our app do its job precisely in no time, and save hours that you can spend optimizing your sales.
Is there any chance the reserve level.
If I edit the tracking information on Shopify, will your app automatically edit it on my PayPal account?
Yes. When you update the information on Shopify, Shopify notifies us and our application updates PayPal automatically.
How do you calculate the orders in my subscription?
Just the successfully submitted tracking orders are counted. Other orders won't count against your subscription credits.
I have completed the same orders but not all tracking information is added. What happens?
All orders with tracking information are of course transmitted to PayPal. Without tracking data, we cannot post to PayPal and will notify you on the dashboard. If you add the tracking data later, it will be transferred then.
Does the app support Telecheckout?
Yes.
Does the app support CheckoutX?
No. Because CheckoutX does not send the PayPal transaction_id, there is not enough information to send tracking.
I sell digital products, how should I send tracking?
You should send the download link to the buyer. We will submit the tracking to PayPal under the Other carrier with the link.
Why can't the Free Plan use the processing feature for old orders?
The "get old orders" feature is not included in the Free plan yet, but just drop us an email request to discuss.
Can I send tracking information synchronized between Shopify and PayPal?
Yes, you can process the orders from the last 60 days. Just click [Get Old Orders] on your admin dashboard, then enter the date range you want. It may take a few minutes to hours to submit all orders.
How is tracking information synchronized between Shopify and PayPal?
Once your order is fulfilled and tracking information is there, Shopify notifies us. Our software will submit this information automatically to PayPal. You will find this progress on your admin dashboard.
Does the app support Intercart?
Yes, our app does support Intercart checkout.
The app shows over quota, why?
Over quota means the number of orders submitted to PayPal is beyond your current plan limitation. Please consider upgrading to a higher plan to sync all orders.
https://docs.synctrack.io/welcome-to-synctrack/faqs-documents/synctrack-functionality-related-faqs
2022-06-25T01:29:02
CC-MAIN-2022-27
1656103033925.2
[]
docs.synctrack.io
DépouillementsAjouter le résultat dans votre panier Article : texte imprimé Breastfeeding Medicine and Black History Month: Studies on the African American Experience and Breastfeeding Breastfeeding Sahira Long, Auteur | In June 2019, after the U.S. Breastfeeding Committee African American/Black Identity Caucus meeting, a group of participants discussed the possibility of writing an article together on the topic of Black breastfeeding disparities and solutions t[...] Article : texte imprimé Black/African American Breastfeeding Experience: Cultural, Sociological, and Health Dimensions Through an Equity Lens Background: Disparities in breastfeeding (BF) continue to be a public health challenge, as currently only 42% of infants in the world and 25.6% of infants in the United States are exclusively breastfed for the first 6 months of life. In 2019, th[...] Article : texte imprimé For African American (AA) families on Chicagoland's South Side who choose to breastfeed, finding and receiving services needed to reach their goals are difficult. The disparities in breastfeeding support across Chicagoland are symptomatic of ine[...] Article : texte imprimé The Historical, Psychosocial, and Cultural Context of Breastfeeding in the African American Community Breastfeeding provides a range of benefits for the infant's growth, immunity, and development. It also has health benefits for the mother, including a reduced risk of premenopausal breast cancer, earlier return to prepregnancy weight, reduction [...] Article : texte imprimé Breastfeeding Communities for Fatherhood: Laying the Groundwork for the Black Fatherhood, Brotherhood, and Manhood Movement The role fathers play in the lives of their children is, as any behavior, dependent on their knowledge of factors influencing the health and safety of children and the societal context in which those fathers live, work, and worship. In the conte[...] Article : texte imprimé Background: Although exposure and personal experiences can guide breastfeeding decisions, the extant research on African American mothers is limited regarding the influence of infant feeding exposure. The persistent race-based breastfeeding disp[...] Article : texte imprimé #EveryGenerationMatters: Intergenerational Perceptions of Infant Feeding Information and Communication Among African American Women Objective: African American (AA) women look to their mother and maternal grandmother for parenting information and support; this intergenerational communication may reinforce or hinder breastfeeding practices. Rooted in Black Feminist Thought, t[...] Article : texte imprimé Objective: Disparities in U.S. breastfeeding rates persist among Black mothers according to birth country and between Black and White mothers, necessitating further investigation of modifiable mediating factors to inform interventions. This stud[...] Article : texte imprimé Racial Disparities in Sustaining Breastfeeding in a Baby-Friendly Designated Southeastern United States Hospital: An Opportunity to Investigate Systemic Racism Background: Racial disparities in breastfeeding rates persist in the United States with Black women having the lowest rates of initiation and continuation. A literature review attributes this to many factorshistorical roles, cultural norms, lac[...] Article : texte imprimé Background: Although breastfeeding is optimal infant nutrition, disparities in breastfeeding persist in the African American population. 
AMEN (Avondale Moms Empowered to Nurse) launched a Peer-to-Peer support group to increase breastfeeding init[...] Article : texte imprimé Breastfeeding Sisters That Are Receiving Support: Community-Based Peer Support Program Created for and by Women of Color Substantial racial disparities accounted for 66% of non-Hispanic Black mothers initiating breastfeeding in 2015 compared with 83% of non-Hispanic white mothers and 87% of Hispanic mothers in Tennessee. Created in 2015, Breastfeeding Sisters That[...]
https://docs.info-allaitement.org/opac_css/index.php?lvl=bulletin_display&id=447
2022-06-25T01:17:01
CC-MAIN-2022-27
1656103033925.2
[array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1900&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Breastfeeding Medicine and Black History Month: Studies on the African American Experience and Breastfeeding'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1901&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'The Peculiar Indifference to Breastfeeding Disparities in the African American Community'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1902&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Black/African American Breastfeeding Experience: Cultural, Sociological, and Health Dimensions Through an Equity Lens'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1903&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', "Structural Racism and Barriers to Breastfeeding on Chicagoland's South Side"], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1905&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'The Historical, Psychosocial, and Cultural Context of Breastfeeding in the African American Community'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1906&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Breastfeeding Communities for Fatherhood: Laying the Groundwork for the Black Fatherhood, Brotherhood, and Manhood Movement'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1907&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Infant Feeding Exposure and Personal Experiences of African American Mothers'], dtype=object) 
array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1908&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', '#EveryGenerationMatters: Intergenerational Perceptions of Infant Feeding Information and Communication Among African American Women'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1909&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Disparities in Breastfeeding Among U.S. Black Mothers: Identification of Mechanisms'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1910&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Racial Disparities in Sustaining Breastfeeding in a Baby-Friendly Designated Southeastern United States Hospital: An Opportunity to Investigate Systemic Racism'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1912&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'African American Breastfeeding Peer Support: All Moms Empowered to Nurse'], dtype=object) array(['https://docs.info-allaitement.org/opac_css/getimage.php?url_image=http%3A%2F%2Fimages-eu.amazon.com%2Fimages%2FP%2F%21%21isbn%21%21.08.MZZZZZZZ.jpg¬icecode=&entity_id=1913&vigurl=https%3A%2F%2Fwww.liebertpub.com%2Fna101%2Fhome%2Fliteratum%2Fpublisher%2Fmal%2Fjournals%2Fcontent%2Fbfm%2F2021%2Fbfm.2021.16.issue-2%2Fbfm.2021.16.issue-2%2F20210216%2Fbfm.2021.16.issue-2.cover.jpg', 'Breastfeeding Sisters That Are Receiving Support: Community-Based Peer Support Program Created for and by Women of Color'], dtype=object) ]
docs.info-allaitement.org
Converts unsigned ints into readable (ascii) characters. More... Converts unsigned ints into readable (ascii) characters. Definition at line 41 of file char.hpp. Reimplemented in ecl::Converter< char, void >. Definition at line 43 of file char.hpp. Converts a single unsigned int into a char type. A similar function is used in stlsoft. This throws an exception and/or configures the error() function for this converter if the input digit is not a char digit ['0'-'9'] Definition at line 57 of file char.hpp.
http://docs.ros.org/en/jade/api/ecl_converters/html/classecl_1_1Converter_3_01char_00_01unsigned_01int_01_4.html
2022-06-25T02:14:59
CC-MAIN-2022-27
1656103033925.2
[]
docs.ros.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-KMSKeyPolicyList -KeyId <String> -Limit <Int32> -Marker <String> -Select <String> -PassThru <SwitchParameter> -NoAutoIteration <SwitchParameter>
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account. Required permissions: kms:ListKeyPolicies (key policy) Related operations:
1234abcd-12ab-34cd-56ef-1234567890ab
arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
NextMarker from the truncated response you just received.
AWS Tools for PowerShell: 2.x.y.z
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-KMSKeyPolicyList.html
2022-06-25T02:47:18
CC-MAIN-2022-27
1656103033925.2
[]
docs.aws.amazon.com
serializes data in the Action Message Format XML (AMFX) format. It can write simple and complex objects, to be used in conjunction with an AMFX-compliant server. To create an encoded XMl, first construct an Encoder: var encoder = Ext.create('Ext.data.amf.XmlEncoder'); Then use the writer methods to out data to the : encoder.writeObject(1); encoder.writeObject({a: "b"}); And access the data through the #bytes property: encoder.body; You can also reset the class to start a new body: encoder.clear(); Current limitations: AMF3 format (format:3) For more information on working with AMF data please refer to the AMF Guide. The output string Defaults to: "" Clears the accumulated data, starting with an empty string Creates new encoder. config : Object Configuration options Converts an XML Document object to a string. xml : Object XML document to convert (typically Document object) A string representing the document Encodes an AMFX remoting message with the AMFX envelope. message : Ext.data.amf.RemotingMessage the message to pass on to serialize. Encodes an array, marking it as an ECMA array if it has associative (non-ordinal) indices array : Array the array to encode Returns an encoded boolean val : Boolean a boolean value Encodes a byte arrat in AMFX format array : Array the byte array to encode Encode a date date : Date the date to encode Returns an encoded double num : Number the double to encode Encodes one ECMA array element key : String the name of the element value : Object the value of the element the encoded key-value pair Encodes a generic object into AMFX format. If a $flexType member is defined, list that as the object type. obj : Object the object to encode the encoded text Returns an encoded int num : Number the integer to encode Returns the encoding for null Returns an encoded number. Decides wheter to use int or double encoding. num : Number the number to encode encode the appropriate data item. Supported types: item : Object A primitive or object to write to the stream the encoded object in AMFX format Returns an encoded string str : String the string to encode Returns the encoding for undefined (which is the same as the encoding for null) Encodes an xml document into a CDATA section xml : XMLElement/HTMLElement an XML document or element (Document type in some browsers) Tries to determine if an object is an XML document item : Object to identify true if it's an XML document, false otherwise Appends a string to the body of the message str : String the string to append Writes an AMFX remoting message with the AMFX envelope to the string. message : Ext.data.amf.RemotingMessage the message to pass on to serialize. Writes an array to the string, marking it as an ECMA array if it has associative (non-ordinal) indices array : Array the array to encode Writes a boolean value to the string val : Boolean a boolean value Writes an AMFX byte array to the string. This is for convenience only and is not called automatically by writeObject. array : Array the byte array to encode Write a date to the string date : Date the date to encode Writes a double tag with the content. num : Number the double to encode Writes a generic object to the string. If a $flexType member is defined, list that as the object type. obj : Object the object to encode Writes a int tag with the content. num : Number the integer to encode Writes the null value to the string Writes a number, deciding if to use int or double as the tag num : Number the number to encode Writes the appropriate data item to the string. 
Supported types: item : Object A primitive or object to write to the stream Writes a string tag with the string content. str : String the string to encode Writes the undefined value to the string Write an XML document to the string xml : XMLElement/HTMLElement an XML document or element (Document type in some browsers) Utility function to generate a flex-friendly UID id : Number used in the first 8 chars of the id. If not provided, a random number will be used. a string-encoded opaque UID
https://docs.sencha.com/extjs/6.2.1/modern/Ext.data.amf.XmlEncoder.html
2022-06-25T01:25:20
CC-MAIN-2022-27
1656103033925.2
[]
docs.sencha.com
View user information
Review useful information about a user's activities in your environment by using the User Facts page:
- Select Users from the Home page, or select Explore > Users from the menu to access the Users Table.
- Select any individual user.
At the top of each user's view, you can:
- Review the Last Update date and time to see how recent the information for this user or account is.
- Add the user to a Watchlist to track the account more closely.
- For users, Account shows the type of user accounts associated with the user, such as Normal, Admin, or Machine.

Monitor a user with a watchlist
You can use watchlists to track specific users or accounts more closely. Review the activity of users or accounts on watchlists on the Threats Dashboard, Anomalies Dashboard, and Users Dashboard. See Investigate Splunk UBA entities using watchlists for more information about watchlists.
Adding a user to a watchlist does not change the behavior of the models generating anomalies or threats, but you can use watchlists to create threats or anomalies based on the behavior of users or accounts added to watchlists. See Monitor policy violations with custom threats to create a custom threat, or Take action on anomalies with anomaly action rules to create an anomaly rule.

Add a user to a watchlist
To monitor a user more closely, add them to a watchlist.
- Click Watchlists.
- Click New User Watchlist to add a custom watchlist, or click User Watchlist to add the user to the default watchlist.
- Enter a Name for the watchlist.
- Click OK to save.

Remove a user from a watchlist
Stop monitoring a user and remove them from a watchlist.
- Click Watchlists.
- Click the name of the watchlist the user is on to remove them.

User Information
Use the User Facts view to understand key aspects of a specific user. To open user facts about a user, click a user's name from any view in Splunk UBA. For example, select Explore > Users and click the name of a user. You can also click the icon when viewing other information about a user, such as the threats, anomalies, or accounts for a user.

Key indicators
Get an overview of the user's activity with overall counts for various key indicators of user activity. If the count is zero, you cannot click through to view more details.
- Number of Threats associated with the user account. Click to view User Threats.
- Number of Anomalies associated with the user account. Click to view User Anomalies.

Learn more about user role information
Understand the role of the user in your environment with several panels that let you easily review basic user information:
- Employee ID number
- The user's Organizational Unit (OU) or department
- Contact information, such as a phone number or email address
- Login ID
- Location data, such as City, State, and Country

Review environment activity
See which threats, anomalies, devices, and domains are associated with the user.
- See threats associated with the user and the risk scores for each threat. Click to view the User Threats page.
- See anomalies associated with the user and the risk scores for each anomaly. Click to view the User Anomalies page.
- See the devices involved with the user's anomalies. Both internal and external devices are listed by IP address, with the risk score associated with each one. Click to see Device Information for a listed device.
- See the domains involved with the user's anomalies. Click to see Domain Information for a listed domain.
Understand user score trend
The User Score Trend panel displays risk score changes over time.

Understand user risk scores
A user's risk score is determined primarily by the number of anomalies and threats associated with that user. A higher number of anomalies or threats of the same type results in a higher user risk score. Models and rules in Splunk UBA use varying algorithms to generate a user's risk score, meaning that not all anomaly types carry the same weight. For example, a user with five anomalies of a certain type may have a different risk score from another user with five anomalies of a different type.

Understand user risk percentiles
Risk percentiles on an individual user's page help you better understand the risk that the user poses to your organization in comparison to other users. Risk percentiles are generated based on a predefined set of activities and are completely separate from risk scores. Risk percentiles are not used in determining a user's risk score, nor are they based on any anomalies or threats.
For example, suppose a user has visited suspicious sites 5 times in the last 30 days, while no one else in their peer group of 100 users has visited any suspicious sites more than twice in the same time span. The user would be in the 100th risk percentile compared to their peers. However, visiting suspicious sites five times in one month is not enough to generate any anomalies, which means this activity has no bearing on the user's risk score. If the user did have a high risk score, the risk score would be based on other anomalies or threats generated against this user.
Splunk UBA tracks the following percentiles on the user page:
- External risk. See Understand user external risk percentile.
- Insider risk. See Understand user insider risk percentile.

Understand user external risk percentile
The external risk percentile examines user behavior related to external systems, such as types of website visits and network connections that cross a firewall. Review the External Risk Summary panel to quickly identify the elements contributing to a user's external risk percentile.
Several elements of a user's activity contribute to their risk percentile. Use the External Risk Details panel to see more details contributing to the external risk percentile for the user.
- Blacklisted site visits. These sites are defined by the domain deny list.
- Suspicious site visits. Suspicious sites are defined by a list of suspicious IP addresses included with Splunk UBA.
- Denied connections. User activity resulting in connections denied by a firewall is based on aggregate events from firewall data in Splunk UBA.
- Adware site visits. Adware sites are defined by a list of adware sites included with Splunk UBA.
- Tor usage. Use of Tor is identified with the IntelTorIPList.
- Risky connections. User activity resulting in a risky connection is based on aggregate events from firewall data in Splunk UBA.

Understand user insider risk percentile
The insider risk percentile examines user behavior related to internal systems, such as login attempts and failures, the different machines that a person uses, and internal application use. Review the Insider Risk Summary panel to quickly identify the elements contributing to a user's insider risk percentile. Use the Insider Risk Details panel to understand the specific behaviors contributing to the user's insider risk percentile.
- Job search site visits. Job search site visits are identified by aggregate events from the data in Splunk UBA.
- Amount of outbound traffic. Outbound traffic amounts are based on aggregate events from firewall data in Splunk UBA.
- Failed logins. Login failures identified by the AD authentication system.
- Login attempts. Attempts to log on to a system, whether they were successful or failed. A high number of login attempts could indicate an elevated risk of account takeover by an attacker.
- Distinct devices usage. The number of devices or machines that a person logs on to can indicate the level of risk they pose to the organization internally. A person who regularly logs on to only one or two devices is easier to quarantine if their account is compromised, but users who regularly log on to a high number of machines could increase the attack surface available to an attacker who compromises their account.
- Denied access. The number of times that someone logs in to a device or application to which they do not have access. This can indicate someone attempting to maliciously access applications or data that they are not authorized to use or view.
- Risky applications per usage. This percentile score is weighted by the number of times a person uses a risky application. If a person uses three risky applications several times a day, their risk percentile will be high compared with the rest of the organization if most people in the organization use one risky application once a month.
- Account lockouts. The number of times that a person's account is locked in the AD authentication system. Frequent lockouts can indicate login problems, attempts by an attacker to brute-force the password, or other malicious issues.

Review user activity
Review the user's information transfer activity with the Data Transfer by Source Device and HTTP Transfer by Domain panels.
- See how much data is transferred and to which device IP addresses in the Data Transfer by Device panel. Click to view the Devices Table filtered on the selected device and user.
- See how much data is transferred and to which domain names in the HTTP Transfer by Domain panel. Click to view the Domains Table filtered on the selected domain and user.
Monitor the user's login activity on the Logins by Destination Device and Logins by Session Type panels.
- Review user login activity by server IP address on the Logins by Destination Device panel. Click to view the Logins Table filtered by the selected server IP address and user.
- Review user session logins on the Logins by Session Type panel. See how many sessions were VPN or other session types. Click to view the Logins Table filtered by the selected session type and user.
Understand typical user activity with the User Events Trend panel; watch for spikes in this panel to spot bursts of user activity that could be suspicious.
Review the Device Attributions to see which devices are associated with this user account. Click a device to see the original event or events that tied this user or user account to a specific device in a particular time period.

User Anomalies
Open the anomalies view for a specific user:
- Select Users from the Home page, or select Explore > Users from the menu to access the Users Table.
- Select an individual user.
- Click the icon, or click Anomalies near the top of the User Facts section.
The user anomalies view allows you to see all anomalies associated with the selected user.
- The User Anomalies Timeline panel shows the types and risk levels of anomalies associated with a user over time. Click an anomaly to view the selected anomaly on the anomalies table.
- The User Anomalies Trend panel shows the number of anomalies associated with the user over time. Click the graph to view the selected anomaly on the anomalies table.
- The User Anomalies table shows all anomalies associated with the user. You can sort the table by anomaly type, event date, risk score, or anomaly summary. Download the anomalies associated with the user in a CSV file by clicking the Save as CSV icon. Click an anomaly to see the Anomaly Details.

User Threats
Open the threats view for a user:
- Select Users from the Home page, or select Explore > Users from the menu to access the Users Table.
- Select an individual user.
- Click the icon, or click Threats at the top of the User Facts section.
The user threats view allows you to see all threats associated with the selected user.
- The User Threats Timeline panel shows the threats associated with a user over time. Click to view the specific threat on the Threats Table.
- The User Threats table shows all threats associated with a user. Review the threat type, participants, start date, last update, and risk score. You can sort the table by column values, or save the table as a CSV by clicking the Save as CSV icon. Click a threat to view the Threat Details.

User Sessions
To view the sessions associated with a user:
- Select Users from the Home page, or select Explore > Users from the menu to access the Users Table.
- Select an individual user.
- Click the icon.
- Scroll down to see the Logins by Session Type panel. This panel summarizes all sessions associated with the selected user. Click View Details to see more information and to further filter the output.

User Accounts
View account information for an individual user:
- Select Users from the Home page, or select Explore > Users from the menu to access the Users Table.
- Select an individual user.
- Click the icon.
You can also view the accounts view for a user after clicking a name for more information from the Peer Groups page.
Review the account information associated with a user to determine if the user has multiple accounts, or whether they have any privileged accounts. If a user has multiple accounts, you can view the account type, Active Directory (AD) group memberships, and account status for each account.

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.4.1, 5.0.5, 5.0.5.1
https://docs.splunk.com/Documentation/UBA/5.0.5.1/User/UserInfo
2022-06-25T02:00:31
CC-MAIN-2022-27
1656103033925.2
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Keeping your online store updated with relevant graphics and content can become a tedious task for store owners. Simply pick the date and time for your theme to be published, and leave the rest to us.

Features
- Schedule a date and time you'd like your new theme to go live.
- Monitor your scheduled change and get updates when it is published.
- Edit your scheduled change to make updates.
- Review change logs.
- Unlimited scheduling.

Key Benefits
- Start sales and promotions on time, every time.
- Consistent site updates lead to more consistent search engine indexing updates.
- Automate the busy work. No more waiting until the late hours of the night or early hours of the morning to publish new themes.
- Monitor your theme updates and get notified any time something goes wrong.
- Save time by scheduling unlimited theme changes.
https://docs.minionmade.com/theme-on-time-app/schedule-theme-changes-on-shopify
2022-06-25T01:18:43
CC-MAIN-2022-27
1656103033925.2
[]
docs.minionmade.com
The mssql_user module is used to create and manage SQL Server users.

    frank:
      mssql_user.present:
        - database: yolo

salt.states.mssql_user.absent(name, **kwargs)
    Ensure that the named user is absent.
    name
        The username of the user to remove

salt.states.mssql_user.present(name, login=None, domain=None, database=None, roles=None, options=None, **kwargs)
    Checks existence of the named user. If not present, creates the user with the specified roles and options.
    name
        The name of the user to manage
    login
        If not specified, will be created WITHOUT LOGIN
    domain
        Creates a Windows authentication user. Needs to be NetBIOS domain or hostname
    database
        The database of the user (not the login)
    roles
        Add this user to all the roles in this list
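As a rough sketch of how these states might be combined in an SLS file, the example below creates a database user tied to an existing SQL Server login and removes a stale one. The login, database, and role names (app_login, appdb, db_datareader, db_datawriter) are illustrative assumptions rather than values taken from this page, and the sketch assumes the mssql execution module is already configured to reach the server.

```yaml
# Ensure an application user exists in the appdb database.
# 'app_login', 'appdb', and the role names below are hypothetical.
app_user:
  mssql_user.present:
    - login: app_login        # omit to create the user WITHOUT LOGIN
    - database: appdb         # the database of the user (not the login)
    - roles:                  # roles to add the user to
      - db_datareader
      - db_datawriter

# Remove a user that should no longer exist.
remove_old_contractor:
  mssql_user.absent:
    - name: old_contractor
```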
https://docs.saltproject.io/en/3001/ref/states/all/salt.states.mssql_user.html
2022-06-25T01:32:41
CC-MAIN-2022-27
1656103033925.2
[]
docs.saltproject.io