The Sync Client stores your data locally on your PC and synchronizes every change to a file or folder, so you can access your folders and files even without an Internet connection. The Drive Client, on the other hand, integrates your cloud storage as a network drive to free up local storage space. Here, too, every change to a file or folder is applied automatically, but an Internet connection is required.
In order to be able to work on a document with team colleagues or external partners at the same time and track all changes in real time, it is a good idea to create a share link. With this you can easily share files with non-registered users and edit them simultaneously.
To create a share link, click "Share" next to the respective folder or file in the web interface. You can now copy the link or forward it by email to the selected person.
In luckycloud, a "user" is someone who has a luckycloud account and has access to luckycloud. A user can be assigned various roles by the administrator.
At luckycloud, an "administrator" is the person who handles payment, account and team management. An administrator account also has access to the cloud storage of luckycloud and can do everything a user can do.
In the web interface you will find the Tools tab on the left side and below it the Activities item. Here you will see all activities of all users on files and folders sorted by date, each with the corresponding time.
For each change, a snapshot is created that contains the state of the library after the change. If you accidentally delete a file, you can easily restore it. To do this, go to the library and click the clock (Versions) in the top right-hand corner, where you will also see an overview of all the files that have been changed. Under Action, the option View snapshot appears on mouse-over. Here you can download any version and also restore it (also via mouse-over under Action).
Sometimes you and other users may be editing the same file at the same time, and your changes may conflict with the changes made by others. In this case, your change is saved, while the other users' changes are saved as conflict files. These files end with the author's email address and the current time, for example test.txt ([email protected] 2013-10-01-00-12-24). This gives you the opportunity to manually review and merge the changes.
In the Team Manager, click the "edit" gear. Here you can customize the settings for each individual user. In the private storage field, you have the option to assign storage quotas. These should not exceed the available team storage size; if you need more, you can always make an adjustment in the configurator.
In the web interface, under Tools, click Published Library and then click the Publish Library button.
Deleted data in the recycle bin and versions also occupy disk space. We recommend that you clean up your data.
Attention: Irrevocable deletion. It is no longer possible to restore the deleted versions!
If you delete a user, all luckycloud services are deactivated with immediate effect and the data stored there is completely deleted. To add this user again, please contact our customer support by email or by phone at +49 30 814 570 920.
An encrypted library cannot be decrypted again. Encrypting a library is a one-time process, like a fingerprint, that is irreversible. You must upload the entire encrypted library's data to a new library to "decrypt" or re-encrypt it.
Within a browser, a single file for upload or download can be a maximum of 4 GB. This applies to your storage web interface as well as to external share links.
If you need to upload or download larger files, please use our Sync or Drive Client.
Important: when you download a folder, it is automatically converted to a zip file, which must also not exceed the 4 GB limit.
You can access our service using WebDAV; for security reasons, we no longer offer SFTP.
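Any standard WebDAV client can be used. As a rough illustration only, the sketch below lists a folder over WebDAV from Python; the URL, username, and password are placeholders, since the actual endpoint depends on your luckycloud account.

```python
# Hedged sketch: list a folder's contents over WebDAV with a PROPFIND request.
# The URL and credentials below are placeholders, not real luckycloud values.
import requests

WEBDAV_URL = "https://dav.example.invalid/My Library/"  # placeholder endpoint
AUTH = ("your-username", "your-password")               # placeholder credentials

response = requests.request(
    "PROPFIND",
    WEBDAV_URL,
    auth=AUTH,
    headers={"Depth": "1"},  # list only the folder's immediate children
    timeout=30,
)
response.raise_for_status()
print(response.text)  # multi-status XML describing the folder contents
```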
When setting up a recurring payment via PayPal, you allow luckycloud to debit your Paypal account for a period of 12 months. If the direct debit agreement expires, you must enter into a new agreement or set up payment at the currently applicable prices. We recommend using SEPA direct debit or credit card.
In a nutshell: With us you are a customer - not a product. We do not sell your data to third parties. As a zero-knowledge cloud, we don't touch your data. Unlike other cloud providers, with us you can only pay with money. The focus at luckycloud is on secure, privacy-friendly and highly available data processing and customer-oriented services. Behind our support channels sit not only "real people", but trained, German-speaking employees.
Other cloud providers may be a bit cheaper, but then you have to reckon with losses in security and performance. You alone must decide which trade-off is right for you.
Free support packages
The "Basic Support" is valid for all luckycloud customers.
The "Premium Support" can be used by all luckycloud pro users. System critical requests from luckycloud pro users can only be made by administrators.
luckycloud SLA contracts
luckycloud customers can conclude separate SLA contracts. Here the system administrators can contact 2nd and 3rd level support at predefined times.
More about both can be found in our support offer whitepaper.
Experience has shown that for specific problems it is more useful if we can look at your system directly. In this case, a remote support appointment is worthwhile. During this appointment, one of our employees will look at your system with you via AnyDesk. This way we can achieve the best service and help you quickly and competently.
You can contact us at any time via phone, email, or our chat to ask for an appointment; you will then be assigned an appointment as soon as possible.
We charge €22.50 per 15 minutes for remote support, unless the issue is an error on our part.
If you do not have AnyDesk installed, you can download the software from the AnyDesk website.
Restart your computer.
Have your luckycloud login details ready.
If necessary, have your router IP ready.
Since macOS 12, AnyDesk permissions must be granted for the software to work.
For NAS servers, please have the IP address of your NAS and the admin password ready.
Certain materials on this website have been translated using machine-assisted translation software/tools. Machine-assisted translations of any materials into languages other than English are intended solely as a convenience to the non-English-reading users and are not legally binding. Anybody relying on such information does so at his or her own risk. No automated translation is perfect nor is it intended to replace human translators. Teradata does not make any promises, assurances, or guarantees as to the accuracy of the machine-assisted translations provided. Teradata accepts no responsibility and shall not be liable for any damage or issues that may result from using such translations. Users are reminded to use the English contents.
Once you've determined that a prospective customer wants to buy Cisco Umbrella, you need to convert the trial to a paid subscription. You do this through the Cisco Commerce Workspace (CCW). To successfully convert the trial, the Partner trial must be configured with the latest Deal ID AND the Trial ID key from the Partner console must be present in the deal/order.
If you started a trial without a Deal ID, you must acquire a Deal ID from CCW and update the Trial Management page before converting a trial to a subscription. For more information about the Deal ID, see the Cisco Commerce User Guide.
The Deal ID may have changed if the CCW quote was updated at any time during the trial period. If the Deal ID has changed, make sure that you update the Trial Management page with the latest Deal ID.
Note: You need the trial's Trial ID when you convert the trial to a subscription through CCW. The Trial ID is added to CCW as part of the conversion processes.
When an order for a trial is completed in CCW, the trial is converted to a paid Umbrella subscription, and the customer's Umbrella dashboard is automatically separated from the Partner console—the trial is no longer listed in the Partner console. The customer's Umbrella dashboard maintains all policies, settings, and admins added during the trial.
For more information about converting a trial, see How to convert the trial to a sale.
If a failure occurs, a system administrator restores the vRealize Automation appliance. If a load balancer is used, the administrator restores the load balancer and the virtual appliances that it manages. For vRealize Automation 8.x, you cannot change the host names during restoration.
You might need to restore a failed virtual appliance in the following circumstances:
- You are running a minimal deployment and your only vRealize Automation appliance fails or becomes corrupted.
- You are running a distributed deployment and some, but not all, virtual appliances fail.
- You are running a distributed deployment and all virtual appliances fail.
How you restore a vRealize Automation appliance or virtual appliance load balancer depends on your deployment type and on which appliances failed.
- If you are using a single virtual appliance whose name is unchanged, restore the virtual appliance. No further steps are required.
- If you are running a distributed deployment that uses a load balancer, note that you cannot change the name of the virtual appliance or the virtual IP address of the load balancer. You must redeploy the appliance and restore the backed-up VMs or files with the same IP addresses and host names.
If you are redeploying, reconfiguring, or adding virtual appliances to a cluster, see the Installation and Configuration documentation for the vRealize Automation appliance in the vRealize Automation product documentation.
If some spam or inappropriate comment makes its way through your interface, you can (sometimes) remove the offending commit XYZ.
From Mercurial, The Definitive Guide:
"Mercurial also does not provide a way to make a file or changeset completely disappear from history, because there is no way to enforce its disappearance"
Note that all of these change the repo history, so only do this on your interface-specific repo before it interacts with any other repo. Otherwise, you'll have to survive by cherry-picking only the good commits.
If you're looking for a top-notch consulting firm, you have come to the right place. The Munich-based ROI Management Consulting ranks first in Executive & Creation services according to a recent study, beating out international giants and local players to achieve this standing. Learn more about the firm and why it has received such high marks, and continue reading for some of its best practices and notable accomplishments.
The first step in proving ROI for management consulting is to define what ROI means and how the firm can deliver on it. This involves defining the value of the engagement and communicating that value to key stakeholders. Once that value is defined, it should be revisited frequently. That way, the consultant can focus on what they can do to improve the organization's ROI. The next step is to make certain the ROI is a good one.
Once an organization has selected its objectives for ROI management consulting, it must carefully evaluate and assess the risks of undertaking the project. Although ROI helps assess the first two types of risk, it is silent on the third, because ROI assumes an investment, and no investment is risk-free. This is why determining the ROI of a project is crucial. Identify multiple objectives for both the return on your investment and the project itself. This way, you can make sure the consultant you select is the best choice for your organization.
Hi Team,
We need to migrate SharePoint 2016 from one domain to another. We have created a new environment (in the new domain) and attached the databases. Now we need to migrate the users, and we are confused about whether we should use Move-SPUser or the SPFarm.MigrateUserAccount method, as we are targeting moving the users at the farm level.
You can automate the deployment of the Umbrella roaming client through the use of remote monitoring and management (RMM) tools. This means that you do not manually download and install the Umbrella roaming client on each machine.
Tip
Sample RMM scripts are available that you can use to deploy the Cisco Umbrella roaming client to your customers’ client workstations. If your RMM is not listed, contact your Account Manager for assistance. The Umbrella roaming client software is only supported on client operating systems. Do not install the software on server-class operating systems.
We have prebuilt scripts for several RMM tools, which are available for download from our Deploy-Scripts GitHub page. We also have documentation for the following RMM tools:
The above documentation is specific to MSPs and as such procedures differ slightly for MSSPs. The successful use of an RMM tool requires deployment parameters that are unique to each customer's Umbrella dashboard. For MSSPs, deployment parameters are only available in the downloadable roaming client package, which is downloaded from the customer's Umbrella dashboard. Deployment parameters are not listed in the MSSP console.
Note: Find the Umbrella roaming client deploy script and run it for a small subset of representative computers before running it on all customer computers.
Acquire Deployment Parameters
Deployment parameters are available from the customer's Umbrella dashboard through the roaming client package, which includes a JSON file listing deployment parameters.
- In the MSSP console, navigate to Customer Management and click View Dashboard.
The customer's Umbrella dashboard opens in a new browser tab.
- In Umbrella, navigate to Deployments > Core Identities > Roaming Computers and click Roaming Client.
- Click Download.
- In the downloaded package, locate the JSON file, and open it in a web browser.
Deployment parameters are listed.
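Once you have the JSON file from the steps above, you can feed its values into your RMM's install command. The sketch below is an illustration only: the file name, the JSON key names, and the MSI property names are hypothetical placeholders, so substitute the actual names found in the package you downloaded.

```python
# Hedged sketch: build a silent-install command from the downloaded JSON file.
# "OrgInfo.json", the keys (orgId, fingerprint, userId), and the MSI property
# names are placeholders; use the real names from your downloaded package.
import json

with open("OrgInfo.json") as handle:
    params = json.load(handle)

command = (
    "msiexec /i Setup.msi /qn "
    f"ORG_ID={params['orgId']} "
    f"ORG_FINGERPRINT={params['fingerprint']} "
    f"USER_ID={params['userId']}"
)
print(command)  # hand this command to your RMM tool to run on each client
```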
Note: Incorrect provisioning will require you to uninstall and reinstall the roaming client on the affected devices.
Remove an InfluxDB stack
This page documents an earlier version of InfluxDB. InfluxDB v2.3 is the latest stable version. View this page in the v2.3 documentation.
Use the influx stacks remove command to remove an InfluxDB stack and all its associated resources.
Provide the following:
- Organization name or ID
- Stack ID
```sh
# Syntax
influx stacks remove -o <org-name> --stack-id=<stack-id>

# Example
influx stacks remove \
  -o example-org \
  --stack-id=12ab34cd56
```
How to upload OPML file in Feeds Section
This article explains how to import OPML files in Feeds to save time. Use the OPML format to upload RSS feed URLs in bulk.
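If you only have a plain list of feed URLs, you can generate an OPML file with a few lines of code. The sketch below uses placeholder feed titles and URLs; OPML is a standard format, though ContentStudio's exact import expectations may differ slightly.

```python
# Minimal sketch: write an OPML file from a list of RSS feed URLs (placeholder feeds).
import xml.etree.ElementTree as ET

feeds = [
    ("Example Blog", "https://example.com/feed.xml"),
    ("Another Site", "https://another.example.org/rss"),
]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "My feeds"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    # One outline element per feed; xmlUrl carries the RSS feed address.
    ET.SubElement(body, "outline", text=title, title=title, type="rss", xmlUrl=url)

ET.ElementTree(opml).write("feeds.opml", encoding="utf-8", xml_declaration=True)
```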
Navigate to the Feeds section by clicking on Discovery->Content Feed->Feeds in the top navigation bar.
Select the group from the left-hand menu that you want to add the OPML file to
Click the "Import OPML" button. This opens a dialog box; choose the file to upload from your computer.
You will receive a notification when the upload is successful, and your feeds will appear in your Feeds section in approximately 10 minutes.
Override OPML Option
You can also either override the existing OPML folder structure with this option or create a new group in which the feeds will be added. This option is provided for better organizing your feeds.
View Import History
You can view the Import history of the OPML files by clicking on "View Import History"
View Logs
You can view which of the feeds failed and which of them are added into your feeds by going to the View Logs.
This feature is designed to make it easier for you to manage and consume a large amount of content.
What Do Different Types of Influencers in Filter mean?
As you click on the different filter options, you will see that you can differentiate between the following types of influencers:
Here's where the difference lies
Blogger: Influencers with active blogs
Company: Influential corporations such as media channels, news, informative, or science-based companies (YouTube, CNN, or NASA).
Journalist: People who professionally write or report for news channels, papers, and websites.
Regular People: Normal people who have developed and built influential social accounts.
Conclusion
Congratulations. You have completed this DevSecOps workshop on shifting security testing left.
Recap on what you have learned
Learned how to deploy CloudFormation stacks
Learned about a modern CI/CD pipeline
Learned how to test in AWS CodeBuild
Learned about a couple of open source tools for security testing
Learned why it is important to test as early as possible in the pipeline
Final Thoughts
Due to the time and scope of the workshop there are several things that can and should be instrumented to improve security testing within the CI/CD pipeline. Here are a few suggestions to go above and beyond what you have learned in this workshop.
Add notifications that provide feedback to developers using technology they are already familiar with and using. For build failures, send notifications to a Slack channel, SMS, or email using Amazon Simple Notification Service (SNS) and AWS Lambda; a minimal sketch appears after this list.
Use a branching method such as gitflow and test branches awaiting a pull request review.
Utilize blue/green deployments to instrument additional security testing prior to production deployment.
Enable git hooks to automate testing right when a developer commits code on her/his local machine.
Add additional testing such as language-specific linters, SAST, DAST, dependency CVE scanning, IAST, and RASP. Implementation should be similar to what we accomplished in this workshop.
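As one concrete illustration of the notification idea above, here is a small Lambda handler that forwards a failed-build message to an SNS topic. The topic ARN is a placeholder and the event parsing assumes an EventBridge-style CodeBuild state-change event, so adapt it to whatever event source you actually wire up.

```python
# Hedged sketch: Lambda handler that publishes a build-failure notification to SNS.
# TOPIC_ARN is a placeholder; the event shape assumed here is a CodeBuild
# state-change event delivered via EventBridge and may differ in your setup.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:build-alerts"
)

def handler(event, context):
    detail = event.get("detail", {})
    project = detail.get("project-name", "unknown-project")
    status = detail.get("build-status", "UNKNOWN")

    if status in ("FAILED", "FAULT", "STOPPED"):
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"Build {status}: {project}",
            Message=json.dumps(detail, indent=2),
        )
    return {"status": status}
```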
The sky's the limit on adding additional features and functionality to DevSecOps. The point is to monitor your pipeline and continually make improvements to accelerate the release of features and functionality to your end customers.
Next Steps
Try and implement some of the learnings from the workshop on your company's development process. Don't try and do too much at one time and use an agile iterative approach. Remember that you are also trying to change culture by baking in security so don't try and do too much too fast.
ITIL® 4 Strategist: Direct, Plan, and Improve
The ITIL® 4 Strategist course is designed to teach you the skills needed to create a ‘learning and improving’ IT organization with a reliable and effective strategic direction. The course will help you understand the impact of Agile and Lean work processes on an organization's advantage, which are key drivers of business transformation in today's market. This training provides participants with practical skills, principles, approaches, tools, and techniques for applying their knowledge in day-to-day operations.
Skills Covered
- Operating Model
- Strategy tactics operations
- Governance compliance mgt
- Risk management in DPI
- Optimization of workflow
- Value stream mapping
Why is ITIL® 4 Strategist: Direct, Plan, and Improve Important?
ITIL® 4 Strategist: Direct, Plan, and Improve is a detailed course that will take you through IT Service Management. This course covers the concepts of strategy, service design, service transition, and continual improvement. This training provides guidance on how to improve an organization's efficiency in delivering services to its customers with maximum effectiveness. The strategies learned in this course are essential for any manager or professional who wishes to better their skillset in ITIL®4.
How will ITIL® 4 Strategist: Direct, Plan, and Improve help you?
ITIL® 4 Strategist: Direct, Plan, and Improve is the latest ITIL release that will help you to do all three of these things: first direct, by helping you to create a strategic vision for your organization; then plan, by assisting with developing an actionable strategy; and finally improve, through its continuous improvement process.
Create user accounts
Appropriate roles: Account admin | Global admin | User management admin
Create user accounts for employees who need access to Partner Center. These tasks must be done by the user management admin, accounts admin, or the global admin. The user performing these tasks must also be assigned the Azure Active Directory (AD) roles of User administrator or Global administrator. For more information about Azure AD roles, see Administrator role permissions in Azure Active Directory.
Add a new user
From the Settings icon at the top right of the Partner Center, select Account settings and then select User management.
Select Add user.
Enter the user's full name and unique email address.
Select the type of agent and/or the type of admin you want to assign to the user. Partner Center access is role-based, so you can assign permissions to customize the user's view to show only the features the user needs to complete specific tasks. If users want a role assignment, they can find global admins to contact by going to User management and filtering on global admin.
Select Add to create the user account. Confirm the user's details on the next page.
Important
Make a note of the new user's sign-in information displayed on this page. Be sure to copy and send this information to the new user as you will not be able to access it again later.
The user will need to sign in to the Partner Center with their user name and temporary password. When the user signs in to the Partner Center for the first time, they are prompted to change their password.
Assign user roles
To work in the Partner Center, you must have an assigned role. Currently, roles include Azure Active Directory tenant roles, Cloud Solution Provider (CSP) roles, and non-AAD company roles. An individual company can have a need for all of these roles.
Important
Individuals must be listed in your tenant to access Partner Center. Role assignments provide additional access.
The SoS utility is a command-line tool that you can use to run health checks, collect logs for VMware Cloud Foundation components, and so on.
To run the SoS utility, SSH in to the SDDC Manager appliance using the vcf user account. For basic operations, enter the following command:
```sh
sudo /opt/vmware/sddc-support/sos --option-1 --option-2 --option-3 ... --option-n
```
To list the available command options, use the --help long option or the -h short option.
```sh
sudo /opt/vmware/sddc-support/sos --help
sudo /opt/vmware/sddc-support/sos -h
```
Note: You can specify options in the conventional GNU/POSIX syntax, using -- for the long option and - for the short option.
For privileged operations, enter su to switch to the root user, navigate to the /opt/vmware/sddc-support directory, and type ./sos followed by the options required for your desired operation.
- In the Create Environment wizard, configure these settings and click Next.
- On the Select product page, select the check box for vRealize Operations, configure the settings, and click Next.
- On the End user license agreement page, read the EULA, select the I agree to the terms and conditions check box, and click Next.
- On the License page, select or manually add the vRealize Suite license.
Click Select, select the vRealize Suite or vCloud Suite license alias and click Update.
To add the license manually, click Add, enter the vRealize Suite or vCloud Suite license alias and key, click Validate, and click Add.
- Validate the license by clicking Validate association and click Next.
- On the Certificate page, from the Select certificate drop-down menu, select the vRealize Operations Manager certificate and click Next.
- On the Infrastructure page, verify these values and click Next.
- On the Network page, verify these values and click Next.
Results
You are redirected to the Products page of the Create Environment wizard to deploy vRealize Operations Manager.
Using LDAP Authentication
Metacloud administrators manage project role authorization using the Identity service, while providing user authentication through the LDAP directory. The Identity service stores user credentials in a SQL Database and a Lightweight Directory Access Protocol (LDAP)-compliant directory server. LDAP simplifies integration of Identity authentication with an organization’s existing directory service, like Active Directory (AD), and user account management process. Authentication requests received by the Identity service delegate to the LDAP system.
The Identity v3 service allows for multiple domains. This means that a domain could have a different authentication back end. A domain contains information for user roles, groups, and group member lists. Administrators integrate LDAP by mapping the organizational unit in the LDAP directory to a role or a group of users in the Identity service domain. A successful authentication generates a token used for accessing authorized services available to your group or role.
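For illustration, the sketch below shows what a domain-scoped Identity v3 (Keystone-style) password authentication request looks like when the user lives in an LDAP-backed domain. The endpoint, domain, project, and credentials are placeholders; your Metacloud deployment will expose its own names and URLs.

```python
# Hedged sketch: request a domain-scoped Identity v3 token with username/password.
# The Keystone URL, domain, project, and credentials are placeholders.
import requests

KEYSTONE_URL = "https://identity.example.invalid/v3/auth/tokens"  # placeholder

payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "[email protected]",        # LDAP user name
                    "domain": {"name": "ldap_domain"},  # domain backed by LDAP
                    "password": "your-ldap-password",
                }
            },
        },
        "scope": {
            "project": {"name": "demo", "domain": {"name": "ldap_domain"}}
        },
    }
}

response = requests.post(KEYSTONE_URL, json=payload, timeout=30)
response.raise_for_status()
print("Token:", response.headers["X-Subject-Token"])
```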
To register an LDAP-based account:
If your Metacloud contains a domain configured to authenticate using an LDAP-compliant directory service, use the Register User dialog box to register your account for Metacloud access.
- In the Dashboard Log In page, select First time user? Register Here.
Provide your Active Directory user name, email address, and password.
Note
The Register User form requires your full email address.
- Once you register, you can use your LDAP user name and password to log in to Metacloud.
Password Maintenance
When using LDAP, do not change your password using the Metacloud Dashboard. Change your password according to the policies of your organization and then use your new password to log in to Metacloud. If you have any concerns regarding a Dashboard or CLI login failure, contact your Metacloud administrator.
Federation Gateway
A common architecture is the so-called federation gateway. In this approach IdentityServer acts as a gateway to one or more external identity providers.
This architecture has the following advantages
- your applications only need to know about the one token service (the gateway) and are shielded from all the details about connecting to the external provider(s). This also means that you can add or change those external providers without needing to update your applications.
- you control the gateway (as opposed to some external service provider) - this means you can make any changes to it and can protect your applications from changes those external providers might do to their own services.
- most external providers only support a fixed set of claims and claim types - having a gateway in the middle allows post-processing the response from the providers to transform, add, or amend domain-specific identity information.
- some providers don’t support access tokens (e.g. social providers) - since the gateway knows about your APIs, it can issue access tokens based on the external identities.
- some providers charge by the number of applications you connect to them. The gateway acts as a single application to the external provider. Internally you can connect as many applications as you want.
- some providers use proprietary protocols or made proprietary modifications to standard protocols - with a gateway there is only one place you need to deal with that.
- forcing every authentication (internal or external) through one single place gives you tremendous flexibility with regards to identity mapping, providing a stable identity to all your applications and dealing with new requirements
In other words - owning your federation gateway gives you a lot of control over your identity infrastructure. And since the identity of your users is one of your most important assets, we recommend taking control over the gateway.
Implementation
Our quick start UI utilizes some of the below features. Also check out the external authentication quickstart and the docs about external providers.
- You can add support for external identity providers by adding authentication handlers to your IdentityServer application.
- You can programmatically query those external providers by calling IAuthenticationSchemeProvider. This allows you to dynamically render your login page based on the registered external providers.
- Our client configuration model allows restricting the available providers on a per-client basis (use the IdentityProviderRestrictions property).
- You can also use the EnableLocalLogin property on the client to tell your UI whether the username/password input should be rendered.
- Our quickstart UI funnels all external authentication calls through a single callback (see ExternalLoginCallback on the AccountController class). This allows for a single point for post-processing.
The order CSS property sets the order in which an item is laid out in a flex or grid container. Items in a container are sorted by ascending order value and then by their source code order.
```css
/* <integer> values */
order: 5;
order: -5;

/* Global values */
order: inherit;
order: initial;
order: unset;
```
Note: order is only meant to affect the visual order of elements and not their logical or tab order. order must not be used on non-visual media such as speech.
Values: <integer> represents the ordinal group to be used by the item. The initial value is 0.
© 2005–2018 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
Using Markers to Enable Context-Sensitive Help in WebWorks Help
To enable context-sensitive help links in WebWorks Help, you need to enable the TopicAlias marker. By default, ePublisher sets the Marker type option for a marker named TopicAlias to Topic alias. You can create a marker with a different name and set the Marker type option for that marker to Topic alias.
Then, writers can use this marker in the source documents to define a topic ID in each topic that will be opened by the application. Topic IDs must follow these guidelines (a small validation sketch follows the list):
Must be unique
Must begin with an alphabetical character
May contain alphanumeric characters
May not contain special characters or spaces, with the exception of underscores (_)
To assign topic alias behavior to topic alias markers
Open your Stationery design project.
On the View menu, click Style Designer.
In Marker Styles, select the marker style you want to modify.
On the Options tab, set Marker type to Topic alias.
Adobe FrameMaker
Adobe FrameMaker provides a comprehensive publishing solution with XML-based structured authoring. You can develop the templates you need to deliver polished technical documentation for large product libraries. FrameMaker allows you to create both structured and unstructured content. You can also create DITA-compliant content. FrameMaker provides a good solution for the following conditions:
Long source documents
Many images included in your source documents
Multiple page layouts used throughout your source documents
Conditions needed to deliver multiple versions of your source documents
Multiple files to allow multiple writers to work simultaneously on the content
Comprehensive format controls, such as keep with previous paragraph
For more information about using FrameMaker and establishing single-sourcing standards with FrameMaker, see “Designing Adobe FrameMaker Formats and Standards” on page 71.
Deliver Full-Featured, Context-Sensitive Help Systems
Products need to provide comprehensive help systems that meet the needs of many potential audiences. Content design and delivery must ensure that users get the information they need when, where, and how they need it. Some products need to deliver different content to different audiences. Other products are sold by multiple companies and require distinct product branding.
ePublisher provides comprehensive support for many advanced features used in online content design and delivery, including the following elements:
Customizable browse navigation and breadcrumbs
Customizable table of contents and mini-TOCs
Pop-ups and expandable/collapsible text sections
Related topics
Images, image maps, and multiple forms of multimedia
Context-sensitive help topics
Merged help systems (multi-volume help)
Variables and conditions
Accessibility features, such as alternate text and long descriptions
Field-level help
Setting the Font for a Character
Setting fonts for online output is an important step in making sure your content is properly displayed for your audience. Because many browsers and help systems use only the fonts available on the user’s computer, you may not be able to use specific fonts, such as Times New Roman, as some computers may not have those fonts installed. You can specify a font family, such as sans-serif, to ensure a font of a similar type is used on each computer. You can also specify multiple fonts, separated by commas, to allow the browser to display the first available font.
To set the font of a character
Open your Stationery design project.
On the View menu, click Style Designer.
In Character Styles, select the character style you want to modify.
On the Properties tab, click Font.
Specify the family, size, style, and other properties you want to modify. For more information about a property, click Help.
What is the Confluent Platform?
The Confluent Platform is a stream data platform that enables you to organize and manage the massive amounts of data that arrive every second at the doorstep of a wide array of modern organizations in various industries, from retail, logistics, manufacturing, and financial services, to online social networking. With Confluent, this growing barrage of, often unstructured but nevertheless incredibly valuable, data becomes an easily accessible, unified stream data platform that’s always readily available for many uses throughout your entire organization. These uses can easily range from enabling batch Big Data analysis with Hadoop and feeding realtime monitoring systems, to more traditional large volume data integration tasks that require a high-throughput, industrial-strength extraction, transformation, and load (ETL) backbone.
What is included in the Confluent Platform?
The Confluent Platform is a collection of infrastructure services, tools, and guidelines for making all of your company’s data readily available as realtime streams. By integrating data from disparate IT systems into a single central stream data platform or “nervous system” for your company, the Confluent Platform lets you focus on how to derive business value from your data rather than worrying about the underlying mechanics of how data is shuttled, shuffled, switched, and sorted between various systems.
At its core, the Confluent Platform leverages Apache Kafka, a proven open source technology created by the founders of Confluent while at LinkedIn. Kafka acts as a realtime, fault tolerant, highly scalable messaging system and is already widely deployed for use cases ranging from collecting user activity data, system logs, application metrics, stock ticker data, and device instrumentation signals. Its key strength is its ability to make high volume data available as a realtime stream for consumption in systems with very different requirements – from batch systems like Hadoop, to realtime systems that require low-latency access, to stream processing engines that transform data streams immediately, as they arrive.
Out of the box, the Confluent Platform also includes a Schema Registry, a REST Proxy, and integration with Camus, a MapReduce implementation that dramatically eases continuous upload of data into Hadoop clusters. The capabilities of these tools are discussed in more detail in the following sections. Collectively, the integrated components in the Confluent Platform give your team a simple and clear path towards establishing a consistent yet flexible approach for building an enterprise-wide stream data platform for a wide array of use cases.
These Guides, Quickstarts, and API References help you get started easily, describe best practices both for the deployment and management of Kafka, and show you how to use the Confluent Platform tools to get the most out of your Kafka deployment - with the least amount of risk and hassle.
Apache Kafka
Apache Kafka is a realtime, fault tolerant, highly scalable messaging system. It is widely adopted for many use cases ranging from collecting user activity data, logs, application metrics, stock ticker data, and device instrumentation. Kafka’s unifying abstraction, a partitioned and replicated low latency commit log, allows these applications with very different throughput and latency requirements to be implemented on a single messaging system. It also encourages clean, loosely coupled architectures by acting as a highly reliable mediator between systems.
Kafka is a powerful and flexible tool that forms the foundation of the Confluent Platform. However, it is not a complete stream data platform: like a database, it provides the data storage and interfaces for reading and writing that data, but does not directly help you integrate with other services.
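To give a feel for the programming model, the snippet below sketches a minimal producer and consumer using the confluent-kafka Python client (a newer client than the platform release described here). The broker address and topic name are placeholders.

```python
# Minimal sketch with the confluent-kafka Python client; broker and topic are placeholders.
from confluent_kafka import Producer, Consumer

BROKERS = "localhost:9092"
TOPIC = "user-activity"

# Produce one message and wait for the broker to acknowledge it.
producer = Producer({"bootstrap.servers": BROKERS})
producer.produce(TOPIC, key="user-42", value='{"action": "login"}')
producer.flush()

# Consume it back as part of a consumer group.
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "activity-readers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```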
Confluent’s Commitment to Open Source
Confluent is committed to maintaining, enhancing, and supporting the open source Apache Kafka project. As the documentation here discusses, the version of Kafka in the Confluent Platform contains patches over the matching open source version but is fully compatible with the matching open source version. An existing Kafka cluster can be upgraded easily by performing a rolling restart of Kafka brokers.
Kafka Connect
Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other data systems. It makes it simple to quickly define connectors that move large data sets into and out of Kafka.
Kafka Connect JDBC Connector
The JDBC connector allows you to import data from any relational database with a JDBC driver into Kafka topics. By using JDBC, this connector can support a wide variety of databases without requiring custom code for each one.
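Connectors are typically registered through the Kafka Connect REST interface. The sketch below posts a JDBC source connector configuration; the Connect URL, database connection string, table whitelist, and other values are placeholders that you would replace with your own, and the available options depend on the connector version you run.

```python
# Hedged sketch: register a JDBC source connector via the Kafka Connect REST API.
# The Connect URL, database URL, and config values are placeholders.
import requests

CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db.example.invalid:5432/shop?user=reader&password=secret",
        "mode": "incrementing",                 # track new rows by an increasing id column
        "incrementing.column.name": "id",
        "table.whitelist": "orders",
        "topic.prefix": "jdbc-",                # rows land in topic "jdbc-orders"
    },
}

response = requests.post(CONNECT_URL, json=connector, timeout=30)
response.raise_for_status()
print(response.json())
```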
Kafka Connect HDFS Connector
The HDFS connector allows you to export data from Kafka topics to HDFS files in a variety of formats and integrates with Hive to make data immediately available for querying with HiveQL.
C/C++ library: librdkafka
librdkafka is a high-performance C/C++ implementation of the Kafka protocol that provides producer and consumer clients for applications not written in Java.
Schema Registry
Schemas give the data flowing through Kafka a well-defined structure. But as requirements change, it becomes necessary to evolve these formats. With only an ad-hoc definition, it is very difficult for developers to determine what the impact of their change might be. The Schema Registry addresses this by storing a versioned history of schemas, serving them over a simple RESTful interface, and checking new versions for compatibility so producers and consumers can evolve independently.
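As a rough sketch, a schema can be registered and fetched over the registry's HTTP interface like this; the registry URL and subject name are placeholders, and the content type shown is the one the registry's REST API documents.

```python
# Hedged sketch: register an Avro schema and read it back via the Schema Registry REST API.
# Registry URL and subject name are placeholders.
import json
import requests

REGISTRY_URL = "http://localhost:8081"
SUBJECT = "user-activity-value"

schema = {
    "type": "record",
    "name": "Activity",
    "fields": [{"name": "action", "type": "string"}],
}

resp = requests.post(
    f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(schema)}),
    timeout=30,
)
resp.raise_for_status()
print("Registered schema id:", resp.json()["id"])

latest = requests.get(f"{REGISTRY_URL}/subjects/{SUBJECT}/versions/latest", timeout=30)
print(latest.json())
```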
REST Proxy
Every organization standardizes on their own set of tools, and many use languages that do not have high quality Kafka clients. Only a couple of languages have very good client support because writing high performance Kafka clients is very challenging compared to clients for other systems because of its very general, flexible pub-sub model.
The REST Proxy addresses this by exposing a RESTful interface to the Kafka cluster, so any language that can make HTTP requests can produce and consume messages without a native client.
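As a sketch of what producing through the REST Proxy looks like from a language with no native client, the example below posts a JSON record over HTTP. The proxy URL, topic name, and the exact vnd.kafka content-type version are assumptions and may differ in your installation.

```python
# Hedged sketch: produce a message to a Kafka topic through the Confluent REST Proxy.
# Proxy URL, topic name, and the content-type version are assumptions.
import requests

PROXY_URL = "http://localhost:8082"
TOPIC = "user-activity"

payload = {"records": [{"value": {"user": "42", "action": "login"}}]}

resp = requests.post(
    f"{PROXY_URL}/topics/{TOPIC}",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # partition and offset of the produced record
```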
Camus
Camus is a MapReduce job that provides automatic, zero data-loss ETL from Kafka into HDFS. By running Camus periodically, you can be sure that all the data that was stored in Kafka has also been delivered to your data warehouse in a convenient time-partitioned format and will be ready for offline batch processing.
Confluent Platform’s Camus is also integrated with the Schema Registry. With this integration, Camus automatically decodes the data before storing it in HDFS and ensures it is in a consistent format for each time partition, even if the data contains records using different schemas. By integrating the Schema Registry at every step from data creation to delivery into the data warehouse, you can avoid the expensive, labor-intensive pre-processing often required to get your data into a usable state.
Note
By combining the tools included in the Confluent Platform, you get a fully-automated, low-latency, end-to-end ETL pipeline with online schema evolution. Finally, run periodic Camus jobs to automatically load data from Kafka into HDFS.
#include <v8-profiler.h>
Interface for providing information about embedder's objects held by global handles. This information is reported in two ways:
Thus, if an embedder wants to provide information about native objects for heap snapshots, he can do it in a GC prologue handler, and / or by assigning wrapper class ids in the following way:
V8 takes ownership of RetainedObjectInfo instances passed to it and keeps them alive only during snapshot collection. Afterwards, they are freed by calling the Dispose class function.
Definition at line 580 of file v8-profiler.h.
Definition at line 621 of file v8-profiler.h.
Definition at line 622 of file v8-profiler.h.
Returns the element count in case a global handle retains a subgraph by holding one of its nodes.
Definition at line 615 of file v8-profiler.h.
Returns human-readable group label. It must be a null-terminated UTF-8 encoded string. V8 copies its contents during a call to GetGroupLabel. Heap snapshot generator will collect all the group names, create top level entries with these names and attach the objects to the corresponding top level group objects. There is a default implementation which is required because embedders don't have their own implementation yet.
Definition at line 609 of file v8-profiler.h.
References RetainedObjectInfo::GetLabel().
Returns hash value for the instance. Equivalent instances must have the same hash value.
Returns human-readable label. It must be a null-terminated UTF-8 encoded string. V8 copies its contents during a call to GetLabel.
Referenced by RetainedObjectInfo::GetGroupLabel().
Returns embedder's object size in bytes.
Definition at line 618 of file v8-profiler.h.
Returns whether two instances are equivalent.
Quick Start
A collection of all services for all data types is available in the API endpoint reference. A JSON representation of available REST endpoints can also be retrieved directly from the API. To get an idea of the NBA services' possibilities and the available data, you can also have a look at the Bioportal.
Basic (human readable) queries
The base URL for querying the current version (v2) of the NBA acts as a 'home' screen that lists some information, including the build date and version. The data types in the NBA are: specimen, taxon, multimedia, geo, and metadata. They are accessed as path variables and queried via the query endpoint, for example /v2/specimen/query.
Query parameters
Simple queries for specific fields can be expressed using standard URL query parameters; for example, the parameter collectionType can be queried to get all specimens from the Mammalia collection. An overview of all fields in a data type, and whether you can query them, can be found at /v2/{doctype}/metadata/getFieldInfo. Query parameters can be combined with an & to match multiple terms, for example the specimens from collectionType Mammalia that are female.
Result counts
When using the query endpoint, the first field in the JSON response is the number of results found. It is also possible to retrieve the counts directly: the count endpoint can take exactly the same query parameters as query and, instead of a JSON string, returns an integer number.
Objects and Paths
The fields available in a query directly map to the object structure used to model the four available data types. The objects are nested, so fields can contain subfields. For example, a specimen has (among others) the field gatheringEvent with multiple subfields. Below is an excerpt of the JSON representation of a specimen:
```json
"id": "ZMA.INS.800488@CRS",
"gatheringEvent": {
  "country": "Cabo Verde",
  "provinceState": "Santiago",
  "locality": "São Jorge dos Órgãos",
  "localityText": "CABO VERDE, Santiago, S. Jorge dos Orgaos",
  "dateTimeBegin": "1990-05-01T00:00:00.000+0000",
  "gatheringPersons": [
    {
      "fullName": "A. van Harten"
    }
  ]
}
```
Here, id can be queried directly (id=ZMA.INS.800488@CRS). Subfields, e.g. the country of gathering or the person who collected the specimen, can then be queried with gatheringEvent.country and gatheringEvent.gatheringPersons.fullName. Thus, fields and subfields are separated by a '.'.
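To make the query pattern concrete, the sketch below runs the Mammalia example against the specimen query and count endpoints. The base URL and the name of the result-count field are assumptions; take the real values from the API endpoint reference for your NBA version.

```python
# Hedged sketch: query the NBA specimen service with simple URL parameters.
# BASE_URL and the "totalSize" field name are assumptions; check the endpoint reference.
import requests

BASE_URL = "https://api.biodiversitydata.nl/v2"   # assumed base URL

params = {"collectionType": "Mammalia"}

specimens = requests.get(f"{BASE_URL}/specimen/query/", params=params, timeout=60).json()
print("result count field:", specimens.get("totalSize"))

count = requests.get(f"{BASE_URL}/specimen/count/", params=params, timeout=60)
print("count endpoint:", count.text)

# Nested fields use dot notation, e.g. restrict on the collector's name:
params["gatheringEvent.gatheringPersons.fullName"] = "A. van Harten"
print(requests.get(f"{BASE_URL}/specimen/count/", params=params, timeout=60).text)
```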
Preparing to back up workloads using Azure Backup Server
This article explains how to prepare your environment to back up workloads using Azure Backup Server. With Azure Backup Server, you can protect application workloads such as Hyper-V VMs, Microsoft SQL Server, SharePoint Server, Microsoft Exchange, and Windows clients from a single console.
Note
Azure Backup Server can now protect VMware VMs and provides improved security capabilities. Install the product as explained in the sections below; apply Update 1 and the latest Azure Backup Agent. To learn more about backing up VMware servers with Azure Backup Server, see the article, Use Azure Backup Server to back up a VMware server. To learn about security capabilities, refer to Azure backup security features documentation.
You can also protect Infrastructure as a Service (IaaS) workloads such as VMs in Azure.
Note
Azure has two deployment models for creating and working with resources: Resource Manager and classic. This article provides the information and procedures for restoring VMs deployed using the Resource Manager model.
Azure Backup Server inherits much of its workload backup functionality from Data Protection Manager (DPM), and this article links to DPM documentation to explain some of the shared functionality. However, Azure Backup Server does not back up to tape, nor does it integrate with System Center.
1. Choose an installation platform
The first step towards getting the Azure Backup Server up and running is to set up a Windows Server. Your server can be in Azure or on-premises.
Using a server in Azure
When choosing a server for running Azure Backup Server, it is recommended you start with a gallery image of Windows Server 2012 R2 Datacenter. The article, Create your first Windows virtual machine in the Azure portal, provides a tutorial for getting started with the recommended virtual machine in Azure, even if you've never used Azure before. The recommended minimum requirements for the server virtual machine (VM) should be: A2 Standard with two cores and 3.5 GB RAM.
Protecting workloads with Azure Backup Server has many nuances. The article, Install DPM as an Azure virtual machine, helps explain these nuances. Before deploying the machine, read this article completely.
Using an on-premises server
If you do not want to run the base server in Azure, you can run the server on a Hyper-V VM, a VMware VM, or a physical host. The recommended minimum requirements for the server hardware are two cores and 4 GB RAM. The supported operating systems are listed in the following table:
You can deduplicate the DPM storage using Windows Server Deduplication. Learn more about how DPM and deduplication work together when deployed in Hyper-V VMs.
Note
Azure Backup Server is designed to run on a dedicated, single-purpose server. You cannot install Azure Backup Server on:
- A computer running as a domain controller
- A computer on which the Application Server role is installed
- A computer that is a System Center Operations Manager management server
- A computer on which Exchange Server is running
- A computer that is a node of a cluster
Always join Azure Backup Server to a domain. If you plan to move the server to a different domain, it is recommended that you join the server to the new domain before installing Azure Backup Server. Moving an existing Azure Backup Server machine to a new domain after deployment is not supported.
2. Recovery Services vault
Whether you send backup data to Azure or keep it locally, the software needs to be connected to Azure. To be more specific, the Azure Backup Server machine needs to be registered with a recovery services vault.
To create a recovery services vault:
On the Hub menu, click Browse and in the list of resources, type Recovery Services. As you begin typing, the list filters based on your input. Click Recovery Services vault.
The list of Recovery Services vaults is displayed.
On the Recovery Services vaults menu, click Add.
The Recovery Services vault blade opens, prompting you to provide a Name, Subscription, Resource group, and Location.
- For Name, enter a friendly name to identify the vault. The name needs to be unique for the Azure subscription. Type a name that contains between 2 and 50 characters. It must start with a letter, and can contain only letters, numbers, and hyphens.
- Click Subscription to see the available list of subscriptions. If you are not sure which subscription to use, use the default (or suggested) subscription. There are multiple choices only if your organizational account is associated with multiple Azure subscriptions.
- Click Resource group to see the available list of Resource groups, or click New to create a new Resource group. For complete information on Resource groups, see Azure Resource Manager overview
- Click Location to select the geographic region for the vault.
- Click Create. It can take a while for the Recovery Services vault to be created. Monitor the status notifications in the upper right-hand area in the portal. Once your vault is created, it opens in the portal.
Set Storage Replication
The storage replication option allows you to choose between geo-redundant storage and locally redundant storage. By default, your vault has geo-redundant storage. If this vault is your primary vault, leave the storage option set to geo-redundant storage. Choose locally redundant storage if you want a cheaper option that isn't quite as durable. Read more about geo-redundant and locally redundant storage options in the Azure Storage replication overview.
To edit the storage replication setting:
- Select your vault to open the vault dashboard and the Settings blade. If the Settings blade doesn't open, click All settings in the vault dashboard.
On the Settings blade, click Backup Infrastructure > Backup Configuration to open the Backup Configuration blade. On the Backup Configuration blade, choose the storage replication option for your vault.
After choosing the storage option for your vault, you are ready to associate the VM with the vault. To begin the association, you should discover and register the Azure virtual machines.
3. Software package
Downloading the software package
If you already have a Recovery Services vault open, proceed to step 3. If you do not have a Recovery Services vault open, but are in the Azure portal, on the Hub menu, click Browse.
- In the list of resources, type Recovery Services.
As you begin typing, the list will filter based on your input. When you see Recovery Services vaults, click it.
The list of Recovery Services vaults appears.
From the list of Recovery Services vaults, select a vault.
The selected vault dashboard opens.
The Settings blade opens up by default. If it is closed, click on Settings to open the settings blade.
Click Backup to open the Getting Started wizard.
In the Getting Started with backup blade that opens, Backup Goals will be auto-selected.
In the Backup Goal blade, from the Where is your workload running menu, select On-premises.
From the What do you want to backup? drop-down menu, select the workloads you want to protect using Azure Backup Server, and then click OK.
The Getting Started with backup wizard switches the Prepare infrastructure option to back up workloads to Azure.
Note
If you only want to back up files and folders, we recommend using the Azure Backup agent and following the guidance in the article, First look: back up files and folders. If you are going to protect more than files and folders, or you are planning to expand the protection needs in the future, select those workloads.
In the Prepare infrastructure blade that opens, click the Download links for Install Azure Backup Server and Download vault credentials. You use the vault credentials during registration of Azure Backup Server to the recovery services vault. The links take you to the Download Center where the software package can be downloaded.
Select all the files and click Next. Download all the files coming in from the Microsoft Azure Backup download page, and place all the files in the same folder.
Because the combined download size of all the files is over 3 GB, the download may take up to 60 minutes to complete on a 10 Mbps link.
Extracting the software package
Warning
At least 4GB of free space is required to extract the setup files.
Once the extraction process completes, check the box to launch the freshly extracted setup.exe to begin installing Microsoft Azure Backup Server, and click Finish.
Installing the software package
Click Microsoft Azure Backup to launch the setup wizard.
On the Welcome screen, click Next. This takes you to the Prerequisite Checks section. On this screen, click Check to determine whether the hardware and software prerequisites for Azure Backup Server have been met. If all prerequisites are met successfully, you will see a message indicating that the machine meets the requirements; click Next. If any prerequisites had to be installed, click Next again once they have been installed successfully.
If a failure occurs with a recommendation to restart the machine, do so and click Check Again.
Note
Azure Backup Server will not work with a remote SQL Server instance. The instance being used by Azure Backup Server needs to be local.
Provide a location for the installation of Microsoft Azure Backup Server files and click Next.
Provide a strong password for restricted local user accounts and click Next.
Select whether you want to use Microsoft Update to check for updates and click Next.
Note
We recommend having Windows Update redirect to Microsoft Update, which offers security and important updates for Windows and other products like Microsoft Azure Backup Server.
Review the Summary of Settings and click Install.
The next step is to configure the Microsoft Azure Recovery Services Agent. As a part of the configuration, you will have to provide your vault credentials to register the machine to the recovery services vault. You will also provide a passphrase to encrypt/decrypt the data sent between Azure and your premises. You can automatically generate a passphrase or provide your own minimum 16-character passphrase. Continue with the wizard until the agent has been configured.
Once registration of the Microsoft Azure Backup server successfully completes, the overall setup wizard proceeds to the installation and configuration of SQL Server and the Azure Backup Server components. Once the SQL Server component installation completes, the Azure Backup Server components are installed.
When the installation step has completed, the product's desktop icons will have been created as well. Just double-click the icon to launch the product.
Add backup storage
The first backup copy is kept on storage attached to the Azure Backup Server machine. For more information about adding disks, see Configure storage pools and disk storage.
Note
You need to add backup storage even if you plan to send data to Azure. In the current architecture of Azure Backup Server, the Azure Backup vault holds the second copy of the data while the local storage holds the first (and mandatory) backup copy.
4. Network connectivity
The Azure Backup Server machine needs connectivity to the Azure Backup service; without it, backup to and restore from Azure cannot proceed.
At the same time, the Azure subscription needs to be in a healthy state. To find out the state of your subscription and to manage it, log in to the subscription portal.
Once you know the state of the Azure connectivity and of the Azure subscription, you can use the table below to find out the impact on the backup/restore functionality offered.
Recovering from loss of connectivity
If you have a firewall or a proxy that is preventing access to Azure, you need to whitelist the following domain addresses in the firewall/proxy profile:
- *.Microsoft.com
- *.WindowsAzure.com
- *.microsoftonline.com
- *.windows.net
Once connectivity to Azure has been restored to the Azure Backup Server machine, the operations that can be performed are determined by the Azure subscription state. The table above has details about the operations allowed once the machine is "Connected".
Handling subscription states
It is possible to take an Azure subscription from an Expired or Deprovisioned state to the Active state. However this has some implications on the product behavior while the state is not Active:
- A Deprovisioned subscription loses functionality for the period that it is deprovisioned. On turning Active, the product functionality of backup/restore is revived. The backup data on the local disk also can be retrieved if it was kept with a sufficiently large retention period. However, the backup data in Azure is irretrievably lost once the subscription enters the Deprovisioned state.
- An Expired subscription loses functionality only until it has been made Active again. Any backups scheduled for the period that the subscription was Expired will not run.
Troubleshooting
If Microsoft Azure Backup Server fails with errors during the setup phase (or backup or restore), refer to the error codes document for more information. You can also refer to the Azure Backup related FAQs.
Next steps
You can get detailed information about preparing your environment for DPM on the Microsoft TechNet site. It also contains information about supported configurations on which Azure Backup Server can be deployed and used.
You can use these articles to gain a deeper understanding of workload protection using Microsoft Azure Backup Server.
Configuration
Visualizer for JIRA provides for custom configuration in addition to primary controls.
Basic configuration
Show all fields
Each configuration dropdown is automatically adjusted to only contain fields that are likely to yield useful visualizations. Typically it means omitting fields which are empty, have the same value for all issues, or have different values for each issue. If you would like to use a field that is not visible in a dropdown, check the Show all fields checkbox.
X/Y axis spacing
The X axis spacing and Y axis spacing can be used to add or remove white space between axes. This is a great way to view a large number of issues in a small space. Alternatively, it can help make clearer which items belong to which axis.
Multi-value fields
When your selected fields contain multiple values, choosing Combine will create a single card for each issue. Choosing Separate will create multiple issue cards for each value in the field, essentially duplicating the issue and displaying it at its appropriate intersections.
View presets
Default
A standard configuration of your data.
Social graph
Social graph view will visualize where your major clusters lie. This view is only available when no X or Y axis fields are selected, and shows relationships best with combined multi-value fields.
Vertical stack
Vertical stack view stacks your issues for easy comparison between X-axis values. This view is only available when an X-axis field is selected.
Advanced configuration
Visualizer is built on a popular physics emulator to produce a force-directed graph configurator, and as such, has a number of ways to adjust these physical properties. Tweaking these settings can lead to the discovery of different organically occurring patterns.
Grouping preference
The grouping preference control determines how linked issues in JIRA gravitate toward each other.
The farther left, the less the attraction.
By default grouping preference is weak, providing a visualization where items are primarily located at their intended coordinates with a slight hint of connection to other group members. Grouping preference is only active when Multi-value fields are combined.
Point charge
Dust off your physics notes. Force directed graphs work by naturally repelling items from each other. Point charge is how hard they push away. A higher point charge means more repulsion.
The farther left, the less the repulsion. Completely left sets a point charge of zero. This setting is independent of collision detection.
By default point charge is weak, providing a trade-off between visual spacing and allowing items to group around their intended coordinates.
Collision detection start
Collision detection prevents cards from overlapping. Collision detection start determines how soon cards start “looking” for collisions on their way to their intended coordinates. If collision detection starts too early, the cards go wild and may not reach their ultimate destination, having given up after a number of attempts to move without colliding. If it starts too late, the cards may reach their destination, notice their overlap, and then explode outward trying to avoid collision after the fact.
If the slider is all the way to the left, collision detection is disabled, i.e. it waits so long that it never starts. The farther right, the sooner the cards will detect collisions on their way to their intended coordinates.
Collision detection start is slow by default to allow a higher probability of the titles reaching their intended coordinates, while still allowing them to undergo some collision detection.
Collision detection spread
Collision detection spread controls the strength of the repulsive force of collision detection, i.e. how vigorously a card will try to get away from its neighbor.
When set to the far left, cards will detect a collision, back up a little, and try again. To the far right, cards will detect a collision and fling themselves as far away as possible. This setting is independent of Point charge.
Collision detection spread is fairly strong by default to allow a high probability of the titles reaching their intended coordinates, while still ensuring no overlap.
Enable charge easing
When enabled, once a card reaches its destination, its point charge will reduce to zero. This establishes tight clustering around intersections. By enabling this setting and setting Collision detection start to zero, cards will overlap exactly. | http://docs.expium.com/visualizer-for-jira/configuration/ | 2018-02-17T21:10:26 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.expium.com |
IIS 6.0 F1: FTP Site Logging Properties - General Tab
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2, Windows Server 2008
Use this dialog box to specify how log files are created and saved. When Monthly is selected, a new log file is started monthly, beginning with the first entry that occurs after midnight of the last day of the month.
Note
"Midnight" is midnight local time. Midnight is used for all log file formats except W3C Extended format, which by default uses midnight Greenwich Mean Time, but can be specified as midnight local time (by selecting the Use local time for file naming and rollover option below). the directory in which log files should be saved, or click Browse to locate the directory.
Click to locate the directory to which log files will be saved.
Related Topics
To learn more about setting up an FTP site and logging site activity, see the IIS 6.0 online documentation on the Microsoft Windows Server TechCenter.
In this example, you create a vSphere Replication user that can view replication sites and replications configured between them, but cannot perform modifications.
Prerequisites
Verify that you have two sites connected and replication configured between them.
Verify that you have another user account for each site.
Procedure
- Log in as Administrator on the source site.
- Assign the VRM replication viewer role, with the propagate option, to this user.
- Assign the same privilege on the target replication site.
- Log in as the user with the assigned VRM replication viewer role.
Results
The user with the VRM replication viewer role cannot perform modifications on the configured replication, nor on the replication sites. The following error message appears when this user tries to run an operation: Permission to perform this operation was denied.
Imply ships with a supervise command that manages service lifecycles and console logs. It writes each service's console log to a <service>.log file in the distribution. You can write these
files to any location you want by passing the
-d <directory> argument to
bin/supervise.
For added convenience, you can also tail log files by running
bin/service --tail <service>.
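For example (the config file path and service name below are assumptions based on a typical Imply layout; adjust them to your own configuration):
# write service logs to a custom directory
bin/supervise -c conf/supervise/quickstart.conf -d /data/imply-logs

# follow the console log of a single service
bin/service --tail broker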
Log files are not automatically rotated.
When updating an Imply cluster, you should follow the typical procedure for a Druid Rolling Update. If you have deployed your cluster with the Master, Query, and Data server configuration, then take note of the following:
Please see the Druid operations documentation for tips on best practices, extension usage, monitoring suggestions, multitenancy information, performance optimization, and many more topics.
The path and name, relative to the images directory, of the image to be displayed on the right of the weblet. If specified, the right_absolute_image_path property should be left blank.
Default value
Blank – by default, buttons do not display an image on the right.
Valid values
The path and name of an image, relative to the images directory, enclosed in single quotes. An image can be chosen from a prompter by clicking the corresponding ellipses button in the property sheet.
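For example, a value such as the following (the file name is purely hypothetical) refers to an image underneath the images directory:
'icons/arrow_right_16.png'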
Setting Up the Node View
Now that your environment variables have been properly set up, the final step is to prepare your network to call the Autodesk Maya rendering utility. Once this is done, you can render the 3D objects in your Harmony project and preview them in the Camera view in Render View
mode. This will allow you to composite your 3D scene and effects.
There are two types of script: renderMayaBatch and renderMayaBatchServer.
macOS installation¶
We outline the steps for installing MRtrix3 on macOS. Please consult the MRtrix3 forum if you encounter any issues with the configure, build or runtime operations of MRtrix3.
Check requirements¶
To install MRtrix3 , you will need the following:
- a C++11 compliant compiler (e.g. clang in Xcode)
- Python version >= 2.7 (already included in macOS)
- The zlib compression library (already included in macOS)
- Eigen version >= 3.2
- Qt version >= 5.1 [GUI components only] - important: versions prior to this will not work
and optionally:
- libTIFF version >= 4.0 (for TIFF support)
- FFTW version >= 3.0 (for improved performance in certain applications, currently only
mrdegibbs)
Warning
To run the GUI components of MRtrix3 (
mrview &
shview), you will also need:
Note
If you do not currently plan to contribute to the MRtrix3 code, the most convenient way to install MRtrix3 on macOS is via Homebrew.
If you do not have homebrew installed, you can install it via:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
You need to add the MRtrix3 tap to homebrew:
brew tap MRtrix3/mrtrix3
You can now install the latest version of MRtrix3 with:
brew install mrtrix3
This should be all you need to do. For all installation options type
brew
info mrtrix3. MRtrix3 will get upgraded when you upgrade all homebrew
packages
brew update && brew upgrade. If you want to avoid upgrading
MRtrix3 the next time you upgrade homebrew you can do so with
brew pin
mrtrix3.
Install Dependencies¶
Update macOS to version 10.10 (Yosemite) or higher - OpenGL 3.3 will typically not work on older versions
Install XCode from the Apple Store
Install Eigen3 and Qt5.
There are several alternative ways to do this, depending on your current system setup. The most convenient is probably to use your favorite package manager (Homebrew or MacPorts), or install one of these if you haven’t already.
If you find your first attempt doesn’t work, please resist the temptation to try one of the other options: in our experience, this only leads to further conflicts, which won’t help installing MRtrix3 and will make things more difficult to fix later. Once you pick one of these options, we strongly recommend you stick with it, and consult the community forum if needed for advice and troubleshooting.
- Install Eigen3:
brew install eigen
- Install Qt5:
brew install qt5
- Install pkg-config:
brew install pkg-config
- Add Qt’s binaries to your path:
export PATH=`brew --prefix`/opt/qt5/bin:$PATH
- Install Eigen3:
port install eigen3
- Install Qt5:
port install qt5
- Install pkg-config:
port install pkgconfig
- Add Qt’s binaries to your path:
export PATH=/opt/local/libexec/qt5/bin:$PATH
As a last resort, you can manually install Eigen3 and Qt5: You can use this procedure if you have good reasons to avoid the other options, or if for some reason you cannot get either Homebrew or MacPorts to work.
- Install Eigen3: download and extract the source code from eigen.tuxfamily.org
- Install Qt5: download and install the latest version from
- You need to select the file labelled
qt-opensource-mac-x64-clang-5.X.X.dmg. Note that you need to use at least Qt 5.1, since earlier versions don’t support OpenGL 3.3. We advise you to use the latest version (5.7.0 as of the last update). You can choose to install it system-wide or just in your home folder, whichever suits - just remember where you installed it.
- Make sure Qt5 tools are in your PATH
- (edit as appropriate)
export PATH=/path/to/Qt5/5.X.X/clang_64/bin:$PATH
- Set the CFLAG variable for eigen
- (edit as appropriate)
export EIGEN_CFLAGS="-isystem /where/you/extracted/eigen"
Make sure not to include the final /Eigen folder in the path name - use the folder in which it resides instead!
Install TIFF and FFTW libraries.
- Install TIFF:
brew install libtiff
- Install FFTW:
brew install fftw
- Install TIFF:
port install tiff
- Install FFTW:
port install fftw-3
After building MRtrix3, add the binaries to your PATH by adding the appropriate line to your ~/.profile or ~/.bashrc, e.g. as follows: ./set_path ~/.profile
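The clone-and-build steps that normally precede setting the PATH appear to have been truncated in this copy. For reference, the standard MRtrix3 sequence is roughly the following (check the official instructions for your version):
git clone https://github.com/MRtrix3/mrtrix3.git
cd mrtrix3
./configure
./build
./set_path ~/.profile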
Keeping MRtrix3 up to date¶
You can update your installation at any time by opening a terminal, navigating to the MRtrix3 folder (e.g.
cd mrtrix3), and typing:
git pull
./build
If this doesn’t work immediately, it may be that you need to re-run the configure script:
./configure
and re-run step 1 again.
tcldes(n) 1.1 "Data Encryption Standard (DES)"
Description
The tclDES package is a helper package for des.
Please see the documentation of des for details.
Bugs, Ideas, Feedback
This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category des of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation.
CORAL Resources Module User Guide¶
About CORAL Resources¶
A component of Hesburgh Libraries locally developed ERM, CORAL Resources aids in the management of the electronic resource workflow from the initial request through the acquisition process and into ongoing support and maintenance. CORAL Resources supports the completion of these workflow processes with a convenient task-based queue in which automated email alerts indicate to staff when new tasks are available.
Component Overview¶
CORAL Resources has five major components in the primary navigation at the top of each page.
• Home
• New Resource
• My Queue
• File Import
• Admin
Home¶
Home provides both search and A-Z browse access to the resource records. The Name (contains) field searches against resource name, resource alias, parent resource name and organization name. The sidebar also allows for searching across the Publisher, Platform, and ISBN/ISSN fields for more specific searching. Multiple fields can be combined in a single search for more precise searching. Search results can be exported to a spreadsheet using the excel icon in the upper right corner. The exported file includes more fields than what are displayed on the search results page.
New Resource¶
All new resource records are added through this form. The form includes only select fields which are the most critical for processing new resource requests. The goal was to provide collection managers with a simple and easy-to-use form for submitting new resource requests. The only required fields are resource name, format and acquisition type.
The Format field is meant to indicate the publication medium requested with the most obvious values being print and electronic. Acquisition Type is primarily meant to indicate the distinction between paid and free resources. CORAL users may define their own Acquisition Type to meet the local needs. Resource Type is optional and provides additional context to the type of resource being requested. The values listed for all three of these fields can be edited through the Admin page detailed later in this user guide.
The form allows the option to either save or submit the information entered. The submit option will commit the new request and the system will initiate the workflow for that resource and will send out an email alert that a new resource has been entered. The save option will save the information to the user’s My Queue page Saved Requests where it can be completed and submitted at a later time.
Please note that the system does allow duplicate records for the same resource to be entered. The form will however alert the user that another record with the same resource name already exists.
The Resource Record¶
Product¶
The resource record contains nine tabs where the information about the resource is logically grouped. The first tab, Product, contains the descriptive information such as name, alias, issn, publisher, etc which identifies and defines the resource.
Edit product details¶
The information on the Product tab can be edited by following the Edit Product Details link or by using the edit icon on the tab. The Name, Description, URL, Format and Resource Type fields come from the Add New Resource form. The Parent field identifies a related parent resource and includes an auto- complete feature populated by existing resource records that already exist in the system. An example of a possible Parent relationship would be that between ISI Web of Science (as the parent) and ISI Proceedings Index. Another example could include a package record identified as the parent of a record for an individual journal.
The Product tab allows for the addition of multiple associated organizations and aliases. The values for the Organization Role and Alias Type fields can be customized through the Admin page detailed later in this document. The Organization field includes an auto-complete feature populated by the organizations which already exist in the system. A link to the organization’s record in the Organizations Module will appear in the ‘Helpful Links’ box, shown in the previous figure above, if the Organizations Module has been installed and the interoperability enabled. Please see the Technical Documentation and Install Guide for details on the proper configuration settings to enable this feature.
The Archived checkbox on the Edit Resource screen will set the record status as ‘Archived’. This was intended to be used to identify resources that were no longer available but for which there was value in retaining a record in CORAL.
Add new note¶
An additional notes feature has been added to the Product, Acquisitions and Access tab. The note will be visible only on the tab on which it is added. The Note Type field has been included to provide context to the note. The values for the Note Type field can be customized through the Admin page.
Orders¶
The Orders information allows for tracking of multiple orders for each resource. Users can choose between “create new order,” “clone order,” and “edit order information.”
Create and Edit order information¶
The order information can be created, cloned, or edited by using the create, clone, or edit order links located under Orders. The orders information is meant to provide description to local acquisitions. The Acquisition Type field is the same as was entered on the add new resource form.
The Acquisitions tab also includes the ability to track subscription periods and alert when the period expires. Enter a valid subscription end date and then check the ‘Enable Alert’ checkbox in cases where a subscription expiration alert is desired. The alert settings can be customized through the Admin page. The settings include the ability to set the email address to which the expiration alert will be sent (note this is a global setting) and the alert period. For example the alert can be set to activate on a specific number of days prior to the subscription end date. In order to implement the alerts feature the file sendAlerts.php will need to be run as a nightly cron. See the technical documentation and install guide for details.
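For example, a crontab entry along these lines would run the check nightly (the path to sendAlerts.php depends on where CORAL is installed, so treat it as a placeholder):
# run the subscription-expiration alert check every night at 02:00
0 2 * * * php /var/www/coral/resources/admin/sendAlerts.php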
Order Number is intended for the ILS order number and system number for the ILS bib system number. A link to the resource record in the libraries’ web OPAC can be dynamically displayed on the Acquisitions tab when a bib system number is entered. See the technical documentation and install guide for the necessary settings in the /admin/configuration.ini file to enable this link. Purchasing Site is intended to indicate the library or organization purchasing the resource. The values can be customized through the Admin page. Order number and System Number fields are meant to provide match points with the ILS.
Acquisitions¶
The Acquisitions tab contains details of the libraries’ acquisition of the resource such as order number, cost, fund, license status, etc.
Edit Cost History¶
Additional cost history can be added through the Edit Cost History link. This allows to track cost history for the same resource.
Note: If enhanced cost history is enabled, then the user will see the following additional fields.
Edit license and status¶
The Acquisitions tab includes information about the relevant license. Use the Edit license and status link or the matching icon to update the license information. The values for the Licensing Status field can be customized through the Admin page. Changes to the Licensing Status field are recorded in the History section. It is also possible through the Edit License window to associate the resource with all relevant license records in the Licensing Module. The License Record field includes an auto-complete feature that is populated by the names of all license records in the Licensing Module. See the technical documentation and install guide for the necessary settings in the /admin/configuration.ini file to enable this functionality. A link to each associated license is added to the Helpful Links section of the resource record for quick navigation between modules.
Access¶
The Access tab includes the information about how the resource is accessed including things such as IP versus username/password authentication, simultaneous user limits, authorized sites, etc. The access information can be edited by using the edit access information link or the matching edit icon.
Edit access information¶
The values for all fields on the Access tab except for username and password can be customized through the Admin page. The Authorized Site field is intended to indicate the sites or libraries which are permitted to use the resource. Administering Site is intended to indicate the site or library which is responsible for managing the access. The remaining fields provide the technical details of the access.
Authentication Type is intended to indicate how the resource is authenticated, such as IP or user/password login. Access Method is intended to indicate where the resource is accessed. This field was primarily added to identify resources that were hosted locally, perhaps on a citrix server or in an institutional digital repository, rather than on the publisher or provider’s website. Storage Location is intended for resources that have a physical component such as a CD or hard-drive backup to indicate where the resource is held.
The Username and Password fields on the Access tab are intended for use when the resource is accessed by patrons via a shared username and password login. This is not the login information used for resource administration. Administrative logins are to be stored on the Accounts tab.
Cataloging¶
The Cataloging tab includes data and workflow information related to cataloging the resource, including things such as the URL of where the cataloging records are coming from, cataloging type, cataloging status, and the number of records available and loaded. The cataloging information can be edited by using the Edit Cataloging Details link or the matching edit icon.
Edit Cataloging Details¶
The Cataloging Status and Cataloging Type fields can be customized through the Admin page. The Identifier can be an ILS bib record ID. The Source URL is intended to be the source of the catalog records used. The Cataloging Type is intended to indicate the cataloging approach. Values might include: Batch, Manual, and MARCit. The Cataloging Status is intended to identify the current status of the cataloging work. Values might include: Completed, Ongoing, and Rejected. Checking the OCLC Holdings checkbox indicates that the resource is made available in OCLC.
Contacts¶
The Contacts tab is the same as is found on an organization record in the Organizations Module. It is intended as a directory of contact information for publishers, vendors, etc. Contacts can be added directly to the resource record using the add contact link or they can be inherited from the Organizations Module, as in the figure above. When an organization is associated with the resource on the Product tab all contacts that exist for that specific organization will be inherited and displayed here on the Contacts tab. As with Organizations, the Resources Module includes a Contact Role field for each contact (support, invoicing, etc). The values for Contact Role can be edited through the Admin page.
Account¶
The Accounts tab is the same as is found on an organization record in the Organizations Module. It is intended to store the login credentials used for administrative tasks such as registering ip addresses, downloading usage statistics, and other administration tasks. Accounts can be added directly to the resource record using the add new account link or they can be inherited from the Organizations Module, as in the figure above. When an organization is associated with the resource on the Product tab all accounts that exist for that specific organization will be inherited and displayed here on the Accounts tab. As with Organizations, the Resources Module includes a Login Type field. The values for Login Type can be edited through the Admin page.
Issues¶
Issues related to a resource can be recorded in the Issues tab. Users can report an issue, view open issues or view archived issues. Downtime can also be recorded under Downtime section, where users may report a new downtime, or view current/upcoming downtime or view archived downtime.
The Report New Issue link allows users to enter a new issue. There are several required fields marked with a red star. To add a contact, use the Add Contact link. Users may choose to CC themselves or add additional CCs. All contacts and CCs will receive an email alert about the issue. Fill in the Subject field and a brief description about the issue in the Body field. The Applies to check box has three options and users can select only one of them. For the Applies to all Project Euclid resources option (shown in the example here), the issues will be recorded for all Project Euclid resource records in CORAL. If Applies to selected Project Euclid resources option is selected (as shown in the figure), a list of the available resources from the same organization will show up and users can select one or multiple items on the list. In the example shown, both items on the list are selected.
To look at all open issues, users can click on View Open Issues link and all open issues will be expanded below the linking text. Open issues can be closed by clicking on the Close link in the same view. See screenshot above. Open issues can be downloaded in a csv file by clicking on the excel icon beside the text View Open Issues.
The View Archived Issues link will display all archived/closed issues. Users can also download closed issues in a csv file by clicking on the excel icon.
Users can record downtime related to the resource record via the Report New Downtime link. The downtime report includes downtime start date, downtime resolution date, problem type and some notes. Please note, when it reaches the downtime resolution date, the downtime report will be archived automatically. If the Downtime Resolution date is not entered here, a Resolve link will show in the View Current/Upcoming Downtime link shown the figure below. Problem Type can be configured in Admin Downtime Type tab. New downtime can also be entered in the Organizations Module. Please note Report New Downtime in Organizations module is an optional feature, which can be turned on by updating the Organization module configuration file (resourceIssues=Y).
The View Current/Upcoming Downtime link will display all current or upcoming downtime reports, at either the organizational or the resource level. To archive/close a downtime report, click on the resolve link; the archived downtime should then appear in the View Archived Downtime section. Please note that there is a known bug here: resolved downtime reports are not displayed in the designated section. A bug fix is in progress. Organization-related downtime should be entered in the Organizations Module.
Attachments¶
Additional documents relevant to the resource can be uploaded and made available through the Attachments tab. Multiple attachments are grouped and sorted by attachment type.
New attachments are added using the Add New Attachment link. The Name field is intended to be a descriptive name for the attachment. The Details field allows for any additional information that further explains the attachment. The attachment Type field (email, title list, etc) provides context and allows for a way to group the attachments. The values for attachment Type can be edited through the Admin page.
Workflow¶
The Workflow tab shows the workflow through which the resource needs to follow. The workflow and routing rules can be customized through the Admin page. That process is described later on in this document.
The figure above shows a sample workflow. The first column ‘Step’ is the name of the task which needs to be performed. The second column ‘Group’ identifies the group responsible for the task. Individuals are assigned to these groups through the Admin page. The ‘Start Date’ identifies the date at which the task become active. An email is sent to the assigned group when the task becomes active alerting the group members that they now have a task to perform. The fourth column ‘Complete’ will identify the date when the task is completed and the person who complete the task. Clicking the ‘mark complete’ link will mark the task as complete. The last column “Delete” allows users to delete any unnecessary step. Please note the deleted steps only apply to this local resource record and it will not overwrite the workflow steps in Admin tab.
There are two additional steps which happen as part of each workflow that are not identified as tasks on the Routing tab. An email alert is sent out by the system when a new record is added. The email is sent to a master email address that is specified in a configuration file (configuration.ini). The variable which sets the email address is named ‘feedbackEmailAddress’. See the technical documentation and installation guide for instructions on editing the configuration file. An email alert is also sent out to this same address when all of the workflow steps have been marked as complete. The text of the email alerts is controlled through the use of template files. The templates exist in the /admin/emails/ directory. See the technical documentation and installation guide for more information on editing the email templates.
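As a sketch, the relevant line in configuration.ini might look like this (the file location and email address are placeholders; see the install guide for the authoritative path):
; admin/configuration.ini
feedbackEmailAddress = "eresources@library.example.edu"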
Workflow steps can be reassigned to a group by clicking on the pencil icon beside the group name. As shown in the figure above, selecting a group from the dropdown list will assign the step to a selected group. The reassignment can apply to all later steps if the checkbox “Apply to all later steps” is selected.
The Routing tab includes four additional features displayed as links on the bottom of the page as seen in the screenshot of Routing. The restart workflow link allows anyone with admin privilege to restart the entire workflow process, either in-progress workflows or completed workflows. All completed workflows will be archived automatically and can be viewed through the Display Archived Workflows link. The mark entire workflow complete link marks the entire workflow as complete even if there are unfinished tasks.
The Edit The Current Workflow link will open a Edit Workflow window, where users can make changes to the workflow. Available edit options include: add a new step, delete a step, assign a group to a step, assign parent step, move a step up or down and configure the number of days for the email reminder for any step. Task reminders are often seen in task management software. The email reminder function will remind users after the configured number of days when the step is assigned to the assigned group/member. It’s worth noting that the edits made here will not overwrite the workflow configuration in Admin tab.
My Queue¶
The My Queue page shows the user their recent activity and their outstanding tasks. The page is divided into three tabs: Outstanding Tasks, Saved Requests, and Submitted Requests. The Saved Requests tab displays new resource records which the user has saved to their queue but not yet submitted. The Submitted Requests tab displays the user’s recently submitted records which are still in process. Once the resource’s workflow is complete the record is automatically removed from this tab.
The Outstanding Tasks tab, as shown in the figure above, displays the resources for which the user has an active workflow task which has not yet been completed. Clicking on the resource name or ID number will open the full resource record. Resources are assigned to a user’s queue based on their association with a workflow group. Resources are removed from the user’s queue once the outstanding workflow task on the resource’s Routing tab is marked as complete.
File Import¶
File Import allows users to import a file into CORAL. Users may choose a file from a local drive. The file has to be a delimited CSV file with any of the three delimiter options: comma, semicolon or pipe delimited. The upload button loads the CSV into CORAL.
File Import Configuration¶
Next users need to configure the import settings in the Delimited File Import window. Users can select an existing import configuration, which is configured in Admin (see Admin section below for more details). Once selected, the column number will be populated in the form automatically. If users have not previously created an Import Configuration, then for each of the resource fields, users need to input the column number for each corresponding column in the CSV file. The column number for each column can be found right above the mapping fields in the top portion of the interface. For columns with multiple values that are character-delimited, indicate the delimiter using If delimited, delimited by field. For fields with values across multiple columns, add additional sets using the +Add another links. Use the Dedupe on this column option for ISBN/ISSN sets to ignore any duplicate values that might occur across those columns. The Alias Types, Note Types, and Organization Roles that you can assign to your mapped columns can be configured on the Admin page. Users can also map a set of order related fields in the Acquisitions section, which contains Fund Code, Cost, Order Type and Currency. Lastly, users have the option to enable sending emails when starting workflows, which can be triggered automatically if records imported have the fields (Resource Format, Resource Type and Acquisition) matching with any existing workflows. If users don’t want to bothered with the email notification, then leave this option unchecked.
Once submit, users will be taken to the import preview window. (shown in the screenshot below).
File Import Report Preview¶
The preview allows users to look at the import summary and choose to either proceed with the import or go back to the field mapping windows if anything goes wrong.
After the import is finally submitted, it’ll take users to the final results page. The import will be archived and users can access it again in Imports history.
Admin¶
The Admin page is available only to users with admin privileges. It is the page through which field values are customized and through which user privilege and access is set.
Edit User¶
The first tab on the page is for editing user accounts. There are three privilege levels for the Resources Module: add/edit, view only, and admin. View only is the default privilege for all users who do not have an existing user entry granting them additional privileges. The Accounts tab on the resource record may contain sensitive login credentials that only a select few users need to see. Checking the 'View Accounts' box on the edit user form will allow the user to see the Accounts tab; for all other users, the Accounts tab will be hidden.
Workflow / User Group¶
The Workflow / User Group tab contains the settings which control the workflow routing features. The Resources Module allows for the creation of multiple workflow rules based on resource type. The figure above shows three workflows including one for paid electronic resources, one for free electronic resources and one for paid electronic monographs. New workflows are added using the add workflow link. Users can also copy any existing workflows by clicking on the yellow duplicate icon.
Edit workflow¶
The above figure shows the form through which workflows are created and edited. Here again the workflow being edited is for resources where the Acquisitions Type is ‘Paid’ and the Format is ‘Electronic’. These two fields are required and as such it is required that the values for these fields be defined before creating new workflows. The Resource Type field is optional but when used allows for more granular workflows.
The Workflow Steps section allows for the addition of as many steps (or tasks as this document also calls them) as desired. Enter step name, the group assigned to the step and the parent step when appropriate. Then click the ‘Add’ button to add the step to the workflow. Assigning a group to each step is required and as such it is necessary to create the groups before creating workflows.
The blue arrows on the left side of the form determine the display order in which the workflow steps will appear on the Routing tab. The arrows do not determine the order in which the steps occur in the workflow. That order is determined by the Parent Step. Workflow steps that have no parent step assigned will become active as soon as the new record is submitted. Workflow steps that have an assigned parent step will become active once the parent step is marked as complete.
Edit user group¶
Each step or task in a workflow must be assigned to a user group. Enter a group name and a group email address. An email alert will be sent to this address when a new workflow step assigned to the group becomes active. Users that are assigned to the group will then have the in-process resource appear on the Outstanding Tasks tabs of their My Queue pages.
Import Configuration¶
Here, users can add a new import configuration or edit an existing configuration.
The instruction for adding a new one or editing an existing one is similar to what’s in File Import section described earlier and a configuration name can be entered here. The corresponding column number in the importing csv file can be entered for any field shown in the screenshot.
Other Admin Settings¶
There are many other fields which can be customized through the Admin page. Select the field you wish to edit in the left hand column and then follow the add new links or the edit and delete icons to customize the field values. The fields have each been described in context earlier in this document. There is however an additional setting managed here on the Admin page which needs to be described in more detail; Alert Settings.
Alert settings¶
These alert settings determine the functionality of the alert feature associated with the subscription period end date on the Acquisitions tab of the resource record. Enter the email address to which the alert should be sent when the subscription period for a resource comes to a close. Then enter the number of days prior to expiration that you wish the email alert to be sent. For example, entering 30, 60, 90 days would result in the system sending an alert 90 days prior to the subscription end date, 60 days prior and 30 days prior. The system will also send an alert on the exact day of the subscription end date.
Imply's home screen is flat and all data cubes and dashboards are readily accessible from the home screen and the burger menu.
The home screen is the first screen that you see when you open Imply and it presents you with an overview of all your data cubes and dashboards.
From the home screen you can open any data cube or dashboard and you can also create new ones.
You can also use the carat icon to quickly duplicate, edit, or delete any of the items.
The navigation menu, located at the top left corner, is available from any view and can be used to easily navigate between views.
Clicking outside the menu will close it.
Internet Explorer 8 Deployment Guide
To download the IEAK 8, see.
For a complete overview of the features of Internet Explorer 8, refer to the Internet Explorer 8 Feature Overview Guide.
How to deploy Internet Explorer 8
The process of deploying Internet Explorer 8 to your organization's users' computers is organized in this deployment guide as follows:
aiocoap.numbers.optionnumbers module¶
Known values for CoAP option numbers
The values defined in OptionNumber correspond to the IANA registry "CoRE Parameters", subregistry "CoAP Option Numbers".
The option numbers come with methods that can be used to evaluate their properties; see the OptionNumber class for details.
- class
aiocoap.numbers.optionnumbers.
OptionNumber¶
Bases:
aiocoap.util.ExtensibleIntEnum
A CoAP option number.
As the option number contains information on whether the option is critical, and whether it is safe-to-forward, those properties can be queried using the is_* group of methods.
Note that whether an option may be repeated or not does not only depend on the option, but also on the context, and is thus handled in the Options object instead.
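A minimal sketch of querying these properties is shown below. The member and method names match recent aiocoap releases, but verify them against your installed version.
from aiocoap.numbers.optionnumbers import OptionNumber

opt = OptionNumber.URI_PATH
print(int(opt))                # 11 - the numeric option number
print(opt.is_critical())       # True - Uri-Path is a critical option
print(opt.is_safetoforward())  # False - Uri-Path is marked unsafe to forward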
Switching to an IAM Role (Tools for Windows PowerShell)
For more information about IAM roles and how to create and configure them, see IAM Roles and Creating IAM Roles.
This section describes how to switch roles when you work at the command line with the AWS Tools for Windows PowerShell.
Imagine that you have an account in the development environment and you occasionally
need to
work with the production environment at the command line using the Tools for Windows PowerShell. You already have one access key
credential set available to you. These can be an access key pair assigned to your
standard IAM
user. Or, if you signed-in as a federated user, they can be the access key pair for
the role
initially assigned to you. You can use these credentials to run the Use-STSRole cmdlet, which returns temporary security credentials for a role in the production environment. While you use those temporary credentials, you cannot make use of your user permissions in the Development account because only
one set of
permissions can be in effect at a time.
Note
For security purposes, you can use AWS CloudTrail to audit the use of roles in the
account. The
cmdlet
Use-STSRole must include a
-RoleSessionName parameter with a
value between 2 and 64 characters long that can include letters, numbers, and the
=,.@- characters. The role session name identifies actions in CloudTrail logs that
are performed with the temporary security credentials. For more information, see CloudTrail Event Reference in the
AWS CloudTrail User Guide.
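Putting the pieces together, a role switch with the Tools for Windows PowerShell looks roughly like the sketch below; the role ARN, session name, and region are placeholders.
$response = Use-STSRole -RoleArn "arn:aws:iam::123456789012:role/ProductionAccess" `
                        -RoleSessionName "alice-prod-maintenance"
$tempCreds = $response.Credentials

# Use the temporary credentials with any AWS cmdlet, for example:
Get-S3Bucket -Credential $tempCreds -Region us-east-2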
For more information, see Using AWS Credentials in the AWS Tools for Windows PowerShell User Guide.
Description
This section describes how to show a message to the user with a microflow.
Instructions
Open the microflow, or if necessary create a new one. If you do not know how to add documents to your project, please refer to this article.
The microflow in the screenshot has a ‘Customer’ object passed to it, as several of the attributes of the ‘Customer’ object will be used as parameters in the message.
Add a ‘Show message’ activity to the microflow. If you do not know how to add activities to a microflow please refer to this article.
Double-click on the ‘Show message’ activity to start configuring it.
Use the drop-down menu at ‘Type’ to choose what type of message you want to show to the user.
You can enter the message template with parameters in the ‘Template’ area.
You can enter parameters in the template with the use of braces; these will be filled in by the microflow when it generates the message.
You can add new parameters to the message using the ‘New’ button in the ‘Parameters’ area. Pressing this button will bring up a new window which lets you enter the microflow expression, of which the value will be inserted into the message at the parameter position.
If you have variables or attributes which are not strings, you can use the ‘toString’ expression to convert them.
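As an illustration, a template and its parameters for the scenario described here could look like the following (the attribute names are hypothetical):
Template:     Congratulations {1}, your status has been upgraded to {2}!
Parameter 1:  $Customer/Name
Parameter 2:  toString($Customer/Status)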
Finally you can choose whether or not the message should be blocking by adding or removing a check mark at ‘Blocking’
The end result in this screenshot is a blocking information message to a user congratulating them with a customer status upgrade, with customer specific information added through parameters. | https://docs.mendix.com/howto40/display-a-message-with-a-microflow | 2018-09-18T22:53:42 | CC-MAIN-2018-39 | 1537267155792.23 | [array(['attachments/2621597/2752891.png', None], dtype=object)
array(['attachments/2621597/2752892.png', None], dtype=object)
array(['attachments/2621597/2752893.png', None], dtype=object)
array(['attachments/2621597/2752890.png', None], dtype=object)
array(['attachments/2621597/2752889.png', None], dtype=object)
array(['attachments/2621597/2752894.png', None], dtype=object)] | docs.mendix.com |
.
Important: When creating request packets, put nodes and elements in the order they follow in the packet structure.
Response Packet Structure
The add-user node of the output XML packet is presented by type ProtectedDirAddUserOutput (
protected_dir.xsd) and structured as follows:
- The result node is required. It wraps the response retrieved from the server. Data type: resultType (
common.xsd).
- The status node is required. It specifies the execution status of the operation. Data type: string. Allowed values: ok | error.
- The errcode node is optional. Is returns the error code if the operation fails. Data type: integer.
- The errtext node is optional. It returns the error message if the operation fails. Data type: string.
- The id node is> | https://docs.plesk.com/en-US/12.5/api-rpc/reference/managing-protected-directories/creating-protected-directory-user.49412/ | 2018-09-18T23:19:36 | CC-MAIN-2018-39 | 1537267155792.23 | [array(['/en-US/12.5/api-rpc/images/49414.gif', 'CreatePDUserRPS'],
dtype=object)
array(['/en-US/12.5/api-rpc/images/49417.gif', None], dtype=object)] | docs.plesk.com |
When you configure a virtual flash resource to be used by ESXi hosts and virtual machines, several considerations apply.
You can have only one virtual flash resource, also called a VFFS volume, on a single ESXi host. The virtual flash resource is managed only at the host's level.
You cannot use the virtual flash resource to store virtual machines. Virtual flash resource is a caching layer only.
You can use only local flash devices for the virtual flash resource.
You can create the virtual flash resource from mixed flash devices. All device types are treated the same and no distinction is made between SAS, SATA, or PCI express connectivity. When creating the resource from mixed flash devices, make sure to group similar performing devices together to maximize performance.
You cannot use the same flash devices for the virtual flash resource and vSAN. Each requires its own exclusive and dedicated flash device.
The total available capacity of the virtual flash resource can be used by ESXi hosts as host swap cache and by virtual machines as read cache.
You cannot select individual flash devices for swap cache or read cache. All flash devices are combined into a single flash resource entity. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-4A5DBC80-F3E0-4AB6-A025-690EC038F58D.html | 2018-09-18T23:00:39 | CC-MAIN-2018-39 | 1537267155792.23 | [] | docs.vmware.com |
RadTabStrip Item Builder
The RadTabStrip Item Builder lets you populate your tab strip with a hierarchy of items that do not come from a separate data source. There are two ways to bring up the RadTabStrip Item Builder:
From the RadTabStrip Smart Tag, click on the Build RadTabStrip link.
Right-click on the RadTabStrip component and select Build RadTabStrip from its context menu.
The Build RadTabStrip option is not available in the Smart Tag or context menu if the RadTabStrip control is bound to a data source.
RadTabStrip Item Builder
The RadTabStrip Item builder lets you add, rearrange, configure, and delete tabs. These actions are initiated using the tool bar at the upper left of the Item builder:
The following table describes the controls in the tool bar:
When a tab (either a root item or a child item) is selected, the properties pane on the right of the RadTabStrip Item Builder lets you configure the item by setting its properties. For each item,
Text is the text that appears on the tab.
ToolTip is the text of a tooltip that appears when the user hovers the mouse over the tab.
Value is a string value that you can associate with the tab for use when programming the tab strip behavior.
IsSeparator specifies whether the tab acts as a separator.
IsBreak specifies whether the tab strip displays the next tab in the collection in another row.
Enabled controls whether the tab is initially enabled or disabled.
Selected controls whether the tab is initially selected.
SelectedIndex specifies which child item of the tab is initially selected.
NavigateUrl and Target cause the tab to automatically launch another Web page (specified by NavigateUrl) in the window specified by Target. If the Target property is not set, the new Web page uses the current browser window.
PostBack specifies whether the tab causes a postback when the user selects it.
ScrollChildren, PerTabScrolling, ScrollButtonsPosition, and ScrollPosition specify how the tab scrolls its child items when there is not enough room to display them all.
CssClass, SelectedCssClass, DisabledCssClass, HoveredCssClass, and ChildGroupCssClass control the appearance of the tab when it is in its normal state, selected, disabled, under the mouse, and the appearance of its group of child items, respectively.
ImageUrl, SelectedImageUrl, DisabledImageUrl, and HoveredImageUrl let you specify an image that appears on the left of the tab text when it is in its normal state, selected, disabled, and when the mouse hovers over it, respectively. | https://docs.telerik.com/devtools/aspnet-ajax/controls/tabstrip/design-time/radtabstrip-item-builder | 2018-09-19T00:00:25 | CC-MAIN-2018-39 | 1537267155792.23 | [array(['images/tabstrip_itembuilderwithitems.png',
'Item builder with items'], dtype=object)
array(['images/tabstrip_itembuildertoolbar.png', 'Item builder toolbar'],
dtype=object) ] | docs.telerik.com |
- Before you install
- Linux installation
- MacOS X installation
- Windows installation
- HPC clusters installation
Getting started
- Key features
- Configuration file
- Images and other data
- Command-line usage
- Troubleshooting
Tutorials
- Basic DWI processing
- DWI denoising
- Structural connectome for Human Connectome Project (HCP)
- Using the connectome visualisation tool
- Advanced debugging
- Warping images using warps generated from other packages
- Frequently Asked Questions (FAQ)
Workflows
- DWI Pre-processing for Quantitative Analysis
- Fixel-Based Analysis (FBA)
- Anatomically-Constrained Tractography (ACT)
- Spherical-deconvolution Informed Filtering of Tractograms (SIFT)
- Structural connectome construction
- Global tractography
- Multi-tissue constrained spherical deconvolution
Concepts | https://mrtrix.readthedocs.io/en/0.3.15/ | 2018-09-19T00:09:22 | CC-MAIN-2018-39 | 1537267155792.23 | [] | mrtrix.readthedocs.io |
.
About this release
Copied! Failed! | https://docs.citrix.com/en-us/hdx-optimization/2-9-ltsr/whats-new/2-9-ltsr-initial-release.html | 2021-01-15T18:02:23 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.citrix.com |
Important: #89720 - Only TypoScript files loaded on directory import¶
See Issue #89720
Description¶
With Issue #82812 the new
@import syntax for importing TypoScript has been added.
Among others the change was documented to only load
*.typoscript files in case a directory is imported. However, this was not implemented as such and all files where imported instead.
The code has been fixed to only load
*.typoscript files on directory import. To load other files besides
*.typoscript a suitable file pattern must be added explicitly now:
# Import TypoScript files with legacy ".txt" extension @import 'EXT:myproject/Configuration/TypoScript/Setup/*.txt' | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/10.3/Important-89720-OnlyTypoScriptFilesLoadedOnDirectoryImport.html | 2021-01-15T17:29:13 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.typo3.org |
Communication between fbw and ap processes. More...
Go to the source code of this file.
Communication between fbw and ap processes.
This unit contains the data structure used to communicate between the "fly by wire" process and the "autopilot" process. It must be linked once in a monoprocessor architecture, twice in a twin-processors architecture. In the latter case, the inter-mcu communication process (e.g. SPI) must fill and read these data structures.
Definition in file inter_mcu.h. | http://docs.paparazziuav.org/v5.16/inter__mcu_8h.html | 2021-01-15T18:44:23 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.paparazziuav.org |
Download balances or NAV report
There are a few steps which must be completed to download a report.
Create a report
+ POST /platform/reposrts/wallets/{report-type} // "wallets-snapshots" or "net-asset-values" + Response 201 Created { "fileId": "invoice-reports/f03380b714732592a42e57ccdfd591e9.xlsx" }
After the first step is completed, Copper will start the report generation. This process may take a while…
The next step is to download the report. You will receive a
404 (Not Found Error) until the report is generated, please carry on pull this route until
200 OK response.
+ GET /platform/files/{user-id}/files/{fileId} + Response 200 OK Binary Data | https://docs.copper.co/accounts/download-reports/ | 2021-01-15T17:44:21 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.copper.co |
SimplificationType¶ Summary N/A Remarks N/A Items Name Description TopologyPreserving Simplifies a point, ensuring that the result is a valid point having the same dimension and number of components as the input. The simplification uses a maximum distance difference algorithm similar to the one used in the Douglas-Peucker algorithm. In particular, if the input is an areal point ( Polygon or MultiPolygon ) The result has the same number of shells and holes (rings) as the input, in the same order The result rings touch at no more than the number of touching point in the input (although they may touch at fewer points). DouglasPeucker point while preserving topology use TopologySafeSimplifier. (However, using D-P is significantly faster). | https://docs.thinkgeo.com/products/desktop-maps/v12.0/ThinkGeo.Core/ThinkGeo.Core.SimplificationType/ | 2021-01-15T17:28:12 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.thinkgeo.com |
Monitoring¶
Information that the condor_collector collects can be used to monitor a pool. The condor_status command can be used to display snapshot of the current state of the pool. Monitoring systems can be set up to track the state over time, and they might go further, to alert the system administrator about exceptional conditions.
Ganglia¶
Support for the Ganglia monitoring system () is integral to HTCondor. Nagios () is often used to provide alerts based on data from the Ganglia monitoring system. The condor_gangliad daemon provides an efficient way to take information from an HTCondor pool and supply it to the Ganglia monitoring system.
The condor_gangliad gathers up data as specified by its configuration, and it streamlines getting that data to the Ganglia monitoring system. Updates sent to Ganglia are done using the Ganglia shared libraries for efficiency.
If Ganglia is already deployed in the pool, the monitoring of HTCondor
is enabled by running the condor_gangliad daemon on a single machine
within the pool. If the machine chosen is the one running Ganglia’s
gmetad, then the HTCondor configuration consists of adding
GANGLIAD to the definition of configuration variable
DAEMON_LIST
on that machine. It may be advantageous to run the condor_gangliad
daemon on the same machine as is running the condor_collector daemon,
because on a large pool with many ClassAds, there is likely to be less
network traffic. If the condor_gangliad daemon is to run on a
different machine than the one running Ganglia’s gmetad, modify
configuration variable
GANGLIA_GSTAT_COMMAND
to get the list of monitored hosts
from the master gmond program.
If the pool does not use Ganglia, the pool can still be monitored by a separate server running Ganglia.
By default, the condor_gangliad will only propagate metrics to hosts
that are already monitored by Ganglia. Set configuration variable
GANGLIA_SEND_DATA_FOR_ALL_HOSTS
to
True to set up a
Ganglia host to monitor a pool not monitored by Ganglia or have a
heterogeneous pool where some hosts are not monitored. In this case,
default graphs that Ganglia provides will not be present. However, the
HTCondor metrics will appear.
On large pools, setting configuration variable
GANGLIAD_PER_EXECUTE_NODE_METRICS
to
False will
reduce the amount of data sent to Ganglia. The execute node data is the
least important to monitor. One can also limit the amount of data by
setting configuration variable
GANGLIAD_REQUIREMENTS
. Be aware that aggregate sums over
the entire pool will not be accurate if this variable limits the
ClassAds queried.
Metrics to be sent to Ganglia are specified in all files within the
directory specified by configuration variable
GANGLIAD_METRICS_CONFIG_DIR
. Each file in the directory
is read, and the format within each file is that of New ClassAds. Here
is an example of a single metric definition given as a New ClassAd:
[ Name = "JobsSubmitted"; Desc = "Number of jobs submitted"; Units = "jobs"; TargetType = "Scheduler"; ]
A nice set of default metrics is in file:
$(GANGLIAD_METRICS_CONFIG_DIR)/00_default_metrics.
Recognized metric attribute names and their use:
- Name
-
The name of this metric, which corresponds to the ClassAd attribute name. Metrics published for the same machine must have unique names.
- Value
-
A ClassAd expression that produces the value when evaluated. The default value is the value in the daemon ClassAd of the attribute with the same name as this metric.
- Desc
-
A brief description of the metric. This string is displayed when the user holds the mouse over the Ganglia graph for the metric.
- Verbosity
-
The integer verbosity level of this metric. Metrics with a higher verbosity level than that specified by configuration variable
GANGLIA_VERBOSITYwill not be published.
- TargetType
-
A string containing a comma-separated list of daemon ClassAd types that this metric monitors. The specified values should match the value of
MyTypeof the daemon ClassAd. In addition, there are special values that may be included. “Machine_slot1” may be specified to monitor the machine ClassAd for slot 1 only. This is useful when monitoring machine-wide attributes. The special value “ANY” matches any type of ClassAd.
- Requirements
-
A boolean expression that may restrict how this metric is incorporated. It defaults to
True, which places no restrictions on the collection of this ClassAd metric.
- Title
-
The graph title used for this metric. The default is the metric name.
- Group
-
A string specifying the name of this metric’s group. Metrics are arranged by group within a Ganglia web page. The default is determined by the daemon type. Metrics in different groups must have unique names.
- Cluster
-
A string specifying the cluster name for this metric. The default cluster name is taken from the configuration variable
GANGLIAD_DEFAULT_CLUSTER.
- Units
-
A string describing the units of this metric.
- Scale
-
A scaling factor that is multiplied by the value of the
Valueattribute. The scale factor is used when the value is not in the basic unit or a human-interpretable unit. For example, duty cycle is commonly expressed as a percent, but the HTCondor value ranges from 0 to 1. So, duty cycle is scaled by 100. Some metrics are reported in KiB. Scaling by 1024 allows Ganglia to pick the appropriate units, such as number of bytes rather than number of KiB. When scaling by large values, converting to the “float” type is recommended.
- Derivative
-
A boolean value that specifies if Ganglia should graph the derivative of this metric. Ganglia versions prior to 3.4 do not support this.
- Type
-
A string specifying the type of the metric. Possible values are “double”, “float”, “int32”, “uint32”, “int16”, “uint16”, “int8”, “uint8”, and “string”. The default is “string” for string values, the default is “int32” for integer values, the default is “float” for real values, and the default is “int8” for boolean values. Integer values can be coerced to “float” or “double”. This is especially important for values stored internally as 64-bit values.
- Regex
-
This string value specifies a regular expression that matches attributes to be monitored by this metric. This is useful for dynamic attributes that cannot be enumerated in advance, because their names depend on dynamic information such as the users who are currently running jobs. When this is specified, one metric per matching attribute is created. The default metric name is the name of the matched attribute, and the default value is the value of that attribute. As usual, the
Valueexpression may be used when the raw attribute value needs to be manipulated before publication. However, since the name of the attribute is not known in advance, a special ClassAd attribute in the daemon ClassAd is provided to allow the
Valueexpression to refer to it. This special attribute is named
Regex. Another special feature is the ability to refer to text matched by regular expression groups defined by parentheses within the regular expression. These may be substituted into the values of other string attributes such as
Nameand
Desc. This is done by putting macros in the string values. “\1” is replaced by the first group, “\2” by the second group, and so on.
- Aggregate
-
This string value specifies an aggregation function to apply, instead of publishing individual metrics for each daemon ClassAd. Possible values are “sum”, “avg”, “max”, and “min”.
- AggregateGroup
-
When an aggregate function has been specified, this string value specifies which aggregation group the current daemon ClassAd belongs to. The default is the metric
Name. This feature works like GROUP BY in SQL. The aggregation function produces one result per value of
AggregateGroup. A single aggregate group would therefore be appropriate for a pool-wide metric. As an example, to publish the sum of an attribute across different types of slot ClassAds, make the metric name an expression that is unique to each type. The default
AggregateGroupwould be set accordingly. Note that the assumption is still that the result is a pool-wide metric, so by default it is associated with the condor_collector daemon’s host. To group by machine and publish the result into the Ganglia page associated with each machine, make the
AggregateGroupcontain the machine name and override the default
Machineattribute to be the daemon’s machine name, rather than the condor_collector daemon’s machine name.
- Machine
-
The name of the host associated with this metric. If configuration variable
GANGLIAD_DEFAULT_MACHINEis not specified, the default is taken from the
Machineattribute of the daemon ClassAd. If the daemon name is of the form
Machinewill be the name of the condor_collector host.
- IP
-
A string containing the IP address of the host associated with this metric. If
GANGLIAD_DEFAULT_IPis not specified, the default is extracted from the
MyAddressattribute of the daemon ClassAd. This value must be unique for each machine published to Ganglia. It need not be a valid IP address. If the value of
Machinecontains an “@” sign, the default IP value will be set to the same value as
Machinein order to make the IP value unique to each instance of HTCondor running on the same host.
Absent ClassAds¶.
GPUs¶
HTCondor supports monitoring GPU utilization for NVidia GPUs. This feature
is enabled by default if you set
use feature : GPUs in your configuration
file.
Doing so will cause the startd to run the
condor_gpu_utilization tool.
This tool polls the (NVidia) GPU device(s) in the system and records their
utilization and memory usage values. At regular intervals, the tool reports
these values to the condor_startd, assigning them to each device’s usage
to the slot(s) to which those devices have been assigned.
Please note that
condor_gpu_utilization can not presently assign GPU
utilization directly to HTCondor jobs. As a result, jobs sharing a GPU
device, or a GPU device being used by from outside HTCondor, will result
in GPU usage and utilization being misreported accordingly.
However, this approach does simplify monitoring for the owner/administrator of the GPUs, because usage is reported by the condor_startd in addition to the jobs themselves.
DeviceGPUsAverageUsage
-
The number of seconds executed by GPUs assigned to this slot, divided by the number of seconds since the startd started up.
DeviceGPUsMemoryPeakUsage
-
The largest amount of GPU memory used GPUs assigned to this slot, since the startd started up. | https://htcondor.readthedocs.io/en/v8_9_9/admin-manual/monitoring.html | 2021-01-15T18:00:18 | CC-MAIN-2021-04 | 1610703495936.3 | [] | htcondor.readthedocs.io |
condor_watch_q¶
Track the status of jobs over time.
Synopsis¶
condor_watch_q [-help]
condor_watch_q [general options] [display options] [behavior options] [tracking options]
Description¶
condor_watch_q is a tool for tracking the status of jobs over time
without repeatedly querying the condor_schedd. It does this by reading
job event log files.
These files may be specified directly (the
-files option),
or indirectly via a single query to the condor_schedd when condor_watch_q
starts up (options like
-users or
-clusters).
condor_watch_q provides a variety of options for output formatting, including: colorized output, tabular information, progress bars, and text summaries. These display options are highly-customizable via command line options.
condor_watch_q also provides a minimal language for exiting when certain conditions are met by the tracked jobs. For example, it can be configured to exit when all of the tracked jobs have terminated.
Examples¶
If no users, cluster ids, or event logs are given, condor_watch_q will default to tracking all of the current user’s jobs. Thus, with no arguments,
condor_watch_q
will track all of your currently-active clusters.
To track jobs from a specific cluster,
use the
-clusters option, passing the cluster ID:
condor_watch_q -clusters 12345
To track jobs from a specific user,
use the
-users option, passing the user’s name
the actual query will be the for the
Owner job ad attribute):
condor_watch_q -users jane
To track jobs from a specific event log file,
use the
-files option, passing the path to the event log:
condor_watch_q -users /home/jane/events.log
To track jobs from a specific batch,
use the
-batches option, passing the batch name:
condor_watch_q -batches BatchOfJobsFromTuesday
All of the above “tracking” options can be used together, and multiple values
may be passed to each one. For example, to track all of the jobs that are:
owned by
jane or
jim, in cluster
12345,
or in the event log
/home/jill/events.log, run
condor_watch_q -users jane jim -clusters 12345 -files /home/jill/events.log
By default, condor_watch_q will never exit on its own (unless it encounters an error or it is not tracking any jobs). You can tell it to exit when certain conditions are met. For example, to exit with status 0 when all of the jobs it is tracking are done or with status 1 when any job is held, you could run
condor_watch_q -exit all,done,0 -exit any,held,1
Options¶
General Options¶
-
Display the help message and exit.
- -debug
-
Causes debugging information to be sent to
stderr.
Tracking Options¶
These options control which jobs condor_watch_q will track, and how it discovers them.
- -users USER [USER …]
-
Choose which users to track jobs for. All of the user’s jobs will be tracked. One or more user names may be passed.
- -clusters CLUSTER_ID [CLUSTER_ID …]
-
Which cluster IDs to track jobs for. One or more cluster ids may be passed.
- -files FILE [FILE …]
-
Which job event log files (i.e., the
logfile from
condor_submit) to track jobs from. One or more file paths may be passed.
- -batches BATCH_NAME [BATCH_NAME …]
-
Which job batch names to track jobs for. One or more batch names may be passed.
- -collector COLLECTOR
-
Which collector to contact to find the schedd, if needed. Defaults to the local collector.
- -schedd SCHEDD
-
Which schedd to contact for queries, if needed. Defaults to the local schedd.
Behavior Options¶
- -exit GROUPER,JOB_STATUS[,EXIT_STATUS]
-
Specify conditions under which condor_watch_q should exit.
GROUPERis one of
all,
anyor
none.
JOB_STATUSis one of
active,
done,
idle, or
held. The “active” status means “in the queue”, and includes jobs in the idle, running, and held states.
EXIT_STATUSmay be any valid exit status integer. To specify multiple exit conditions, pass this option multiple times. condor_watch_q will exit when any of the conditions are satisfied.
Display Options¶
These options control how condor_watch_q formats its output.
Many of them are “toggles”:
-x enables option “x”, and
-no-x disables it.
- -groupby {batch, log, cluster}
-
How to group jobs into rows for display in the table. Must be one of
batch(group by job batch name),
log(group by event log file path), or
cluster(group by cluster ID). Defaults to
batch.
- -table/-no-table
-
Enable/disable the table. Enabled by default.
- -progress/-no-progress
-
Enable/disable the progress bar. Enabled by default.
- -row-progress/-no-row-progress
-
Enable/disable the progress bar for each row. Enabled by default.
- -summary/-no-summary
-
Enable/disable the summary line. Enabled by default.
- -summary-type {totals, percentages}
-
Choose what to display on the summary line,
totals(the number of each jobs in each state), or
percentages(the percentage of jobs in each state, of the total number of tracked jobs) By default, show
totals.
- -updated-at/-no-updated-at
-
Enable/disable the “updated at” line. Enabled by default.
- -abbreviate/-no-abbreviate
-
Enable/disable abbreviating path components to the shortest somewhat-unique prefix. Disabled by default.
- -color/-no-color
-
Enable/disable colored output. Enabled by default if connected to a tty. Disabled on Windows if colorama is not available ().
- -refresh/-no-refresh
-
Enable/disable refreshing output. If refreshing is disabled, output will be appended instead. Enabled by default if connected to a tty.
Exit Status¶
Returns
0 when sent a SIGINT (keyboard interrupt).
Returns
0 if no jobs are found to track.
Returns
1 for fatal internal errors.
Can be configured via the
-exit option to return any valid exit status when
a certain condition is met. | https://htcondor.readthedocs.io/en/v8_9_9/man-pages/condor_watch_q.html | 2021-01-15T17:36:10 | CC-MAIN-2021-04 | 1610703495936.3 | [] | htcondor.readthedocs.io |
ConfigSlurper = ""
Settings can either be bound into nested maps or onto a specified JavaBean instance. In the case of the latter an error will be thrown if a property cannot be bound.
Constructs a new ConfigSlurper instance using the given environment
env- The Environment to use
Parses a ConfigObject instances from an instance of java.util.Properties
The- java.util.Properties instance
Parse the given script as a string and return the configuration object
Create a new instance of the given script class and parse a configuration object from it
Parse the given script into a configuration object (a Map) (This method creates a new class to parse the script each time it is called.)
script- The script to parse
Parses a Script represented by the given URL into a ConfigObject
scriptLocation- The location of the script to parse
Parses the passed groovy.lang.Script instance using the second argument to allow the ConfigObject to retain an reference to the original location other Groovy script
script- The groovy.lang.Script instance
location- The original location of the Script as a URL
Sets any additional variables that should be placed into the binding when evaluating Config scripts | http://docs.groovy-lang.org/latest/html/gapi/groovy/util/ConfigSlurper.html | 2016-07-23T19:17:55 | CC-MAIN-2016-30 | 1469257823387.9 | [] | docs.groovy-lang.org |
public final class Converters extends Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public Converters()
public static Set<ConverterFactory> getConverterFactories(Hints hints)
ConverterFactoryinterface.
hints- An optional map of hints, or
nullif none.
public static Set<ConverterFactory> getConverterFactories(Class<?> source, Class<?> target)
ConverterFactory's which can handle convert from the source to destination class.
This method essentially returns all factories in which the following returns non null.
factory.createConverter( source, target );
public static <T> T convert(Object source, Class<T> target)
Convenience for
convert(Object, Class, Hints)
source- The object to convert.
target- The type of the converted value.
nullif a converter could not be found
public static <T> T convert(Object source, Class<T> target, Hints hints)
This method uses the
ConverterFactory extension point to find a converter capable
of performing the conversion. The first converter found is the one used. Using this class
there is no way to guarantee which converter will be used.
source- The object to convert.
target- The type of the converted value.
hints- Any hints for the converter factory.
nullif a converter could not be found. | http://docs.geotools.org/stable/javadocs/org/geotools/util/Converters.html | 2019-10-13T23:26:45 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.geotools.org |
This page provides details on how the settings for the V-Ray Physical Camera work...
Custom balance – Specifies custom white balance.
Temperature (K) – Specifies the temperature (in Kelvins) when White balance is set to Temperature..).. | https://docs.chaosgroup.com/pages/viewpage.action?pageId=38573773&spaceKey=VRAY4MAX | 2019-10-13T22:18:33 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.chaosgroup.com |
Troubleshooting
- My shop is not in English, can I translate the texts in Shipping Rates Calculator+
- A shipment too heavy error is displayed
- Does the Turbo theme performance mode affect the app?
- Will I be double-charged after reinstalling?
- The Shipping Rates Calculator+ widget appears twice
- App auto-configuration on theme publication
- Is the app script file optimised?
- I think I've been charged during the trial period
- How can I create a staff account for you?
- How to create a preview link when the store is not publicly available
- How to prevent the calculator to appear in a particular element
- Does Shipping Rates Calculator+ modify my theme templates?
- How can I access the app preferences page?
- What is the Shopify storefront password?
- How to place the app in a particular form
- I've been charged before the trial expiration
- How can you access the app preferences without my login details?
- Can I prevent the widget to appear in some place?
- My app preferences are not updated | https://docs.codeblackbelt.com/category/329-troubleshooting | 2019-10-13T22:40:51 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.codeblackbelt.com |
-
Index API
The Index API exposes services which allow you to retrieve the permission model and effective permissions of any given item in the index of a Coveo Cloud organization.
In the Coveo Cloud administration console, the Content Browser page uses the Index API when viewing an items permission properties (see Review Item Properties).
The articles in this section cover various Index API use cases.
Interactive generated reference documentation is also available through Swagger UI (see Coveo Cloud Platform API - Index API).
Display Mode
People also viewed | https://docs.coveo.com/en/1481/ | 2019-10-13T23:08:21 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.coveo.com |
8 Statistics Functions
This module exports functions that compute statistics, meaning summary values for collections of samples, and functions for managing sequences of weighted or unweighted samples.
Most of the functions that compute statistics accept a sequence of nonnegative reals that correspond one-to-one with sample values. These are used as weights; equivalently counts, pseudocounts or unnormalized probabilities. While this makes it easy to work with weighted samples, it introduces some subtleties in bias correction. In particular, central moments must be computed without bias correction by default. See Expected Values for a discussion.
8.1 Expected Values
Functions documented in this section that compute higher central moments, such as variance, stddev and skewness, can optionally apply bias correction to their estimates. For example, when variance is given the argument #:bias #t, it multiplies the result by (/ n (- n 1)), where n is the number of samples.
Because the magnitude of the bias correction for weighted samples cannot be known without user guidance, in all cases, the bias argument defaults to #f.
8.2 Running Expected Values
The statistics object allows computing the sample minimum, maximum, count, mean, variance, skewness, and excess kurtosis of a sequence of samples in O(1) space.
The min and max fields are the minimum and maximum value observed so far, and the count field is the total weight of the samples (which is the number of samples if all samples are unweighted). The remaining, hidden fields are used to compute moments, and their number and meaning may change in future releases.
See Expected Values for the meaning of the bias keyword argument.
8.3 Correlation
See Expected Values for the meaning of the bias keyword argument.
8.4 Counting and Binning
If n = (length bounds), then bin-samples returns at least (- n 1) bins, one for each pair of adjacent (sorted) bounds. If some values in xs are less than the smallest bound, they are grouped into a single bin in front. If some are greater than the largest bound, they are grouped into a single bin at the end.
If lte? is a less-than-or-equal relation, the bins represent half-open intervals (min, max] (except possibly the first, which may be closed). If lte? is a less-than relation, the bins represent half-open intervals [min, max) (except possibly the last, which may be closed). In either case, the sorts applied to bounds and xs are stable.
Because intervals used in probability measurements are normally open on the left, prefer to use less-than-or-equal relations for lte?.
If ws is #f, bin-samples returns bins with #f weights.
8.5 Order Statistics
If p = 0, quantile returns the smallest element of xs under the ordering relation lt?. If p = 1, it returns the largest element.
For weighted samples, quantile sorts xs and ws together (using sort-samples), then finds the least x for which the proportion of its cumulative weight is greater than or equal to p.
For unweighted samples, quantile uses the quickselect algorithm to find the element that would be at index (ceiling (- (* p n) 1)) if xs were sorted, where n is the length of xs.
To compute an HPD interval from sorted samples, use hpd-interval/sorted.
You almost certainly want to use real-hpd-interval or real-hpd-interval/sorted instead, which are defined in terms of these.
8.6 Simulations
The functions in this section support Monte Carlo simulation; for example, quantifying uncertainty about statistics estimated from samples. | http://docs.racket-lang.org/math/stats.html | 2018-10-15T10:17:57 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.racket-lang.org |
» Imports
Imports enable a Sentinel policy to access reusable libraries and external data and functions. Anyone can write their own custom import. Imports are what enable Sentinel policies to do more than look at only local context for making policy decisions.") } | https://docs.hashicorp.com/sentinel/concepts/imports | 2018-10-15T11:16:24 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.hashicorp.com |
The DC/OS configuration parameters are specified in YAML format in a
config.yaml file. This file is stored on your bootstrap node and is used during DC/OS installation to generate a customized DC/OS build.
Note: If you want to modify the configuration file after installation, you must follow the DC/OS upgrade process.
FormatFormat
Key-value pairsKey-value pairs
The config.yaml file is formatted as a list of key-value pairs.
For example:
bootstrap_url:
Config blocks and listsConfig blocks and lists
A config block is a group of settings. It consists of the following:
- A key followed by a colon for example:
agent_list:. The key of the config block must be on its own line, with no leading space.
- A list of values formatted by using a, such. | https://docs.mesosphere.com/1.11/installing/production/deploying-dcos/configuration/ | 2018-10-15T10:39:21 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.mesosphere.com |
Contents Now Platform Administration Previous Topic Next Topic IP Address Access Control ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share IP Address Access Control By default the list is empty, meaning that there are no particular restrictions on access to your instance. Before you beginRole required: admin Procedure Navigate to System Security > IP Address Access Control to see a list of your IP access controls. You may need to activate this module. You can add these types of rules: Allow: any IP address in this range is allowed to connect to this instance. Deny: any IP address in this range is not allowed to connect to this instance unless it is listed in an allow rule. Note: These rules also affect transferring update sets. To ensure that IP Address Access Control does not cause update sets to fail, add the target instance as an exception on the source instance. Example 1: Block a particular rangeAn example of how to block a particular range.Example 2: Block everyone except a particular rangeAn example of how to block everyone except a particular range. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/login/task/t_AccessControl.html | 2018-10-15T11:02:37 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
NATS Network Communications
This topic describes NATS internal network communication paths with other Pivotal Application Service (PAS) components... | https://docs.pivotal.io/pivotalcf/2-1/security/networking/nats-network-paths.html | 2018-10-15T10:12:47 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.pivotal.io |
Contents IT Business Management Previous Topic Next Topic Change default values of copied project ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Change default values of copied project Change the default values of a copied project. Before you beginRole required: it_project_manager About this taskChild tasks are defined with the same relationships, each lasting for the same duration as the original tasks. All project tasks are set to Pending. Actual duration and the actual start and end dates are reset to null values. The state is set to New and percent complete is set to 0. Administrators can modify the copy_project UI page to determine which fields are reset or change the default values. Procedure Navigate to System UI > UI Pages. Open the copy_project record. In the Processing script field, modify the values for resetFields or defaultFields. For example: /* resetFields is the array containing the list of names of fields that need to be erased from the copied project tasks * defaultFields is the array containing the key, value pairs of field names and values that need to be set on the copied tasks */ var resetFields = new Array ( ) ; var defaultFields = { } ; resetFields. push ( "work_start" , "work_end" , "work_duration" ) ; defaultFields [ "state" ] = "-5" ; defaultFields [ "percent_complete" ] = "0" ; Click Update. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/project-management/task/t_ModifyTheCopyProjectUIPage.html | 2018-10-15T11:14:03 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
To help you start your day with purpose, the Daily Summary Email gives an overview of Pending tasks, Upcoming meetings and Team's status.
The first section of the email summarizes:
- Your total overdue tasks,
- Tasks due for that particular day and
- Tasks due for that week.
The next section contains details of Meetings scheduled for the day, their duration, attendee details, overdue tasks and organizer's name.
Last section of the email provides insights into the details of team and team members.
Disabling/Enabling Daily Emails
You will receive daily summary emails by default. If you would like to disable it, here are the steps:
- Click on the settings icon on the top right.
2. Access 'Profile Information' from the menu on the left. Uncheck the Daily Summary check box.
At the start of each day you will know what meetings you need to attend, what tasks you have to prioritize and what your team is working on. | http://docs.meetnotes.co/overview-and-getting-started/daily-summary-email | 2018-10-15T11:10:46 | CC-MAIN-2018-43 | 1539583509170.2 | [array(['https://downloads.intercomcdn.com/i/o/38026853/c89ae6f2108155c49b60ce93/Screen+Shot+2017-10-31+at+3.47.10+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/38035351/9d1f6633af7f3519c278e189/Screen+Shot+2017-10-31+at+5.53.10+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/38039185/1c082a1052b9fad494ad8156/Screen+Shot+2017-10-31+at+6.25.29+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/50332609/4597081eea45784af4458e36/Pasted+image+at+2018_03_01+03_42+PM.png',
None], dtype=object) ] | docs.meetnotes.co |
目次を表示
Objectives
Running a WAM using the X_RUN command, enables the WebRoutine output to be saved to a file. This enables permanent (instead of dynamic) web pages to be created from your application. This may be an advantage when seeking to optimize search engines searches over your public web site. It can also be used as a method of saving results which will be fixed for a period of time (for example monthly statistics) rather than repeatedly running a WAM to calculate the same set of results for each user enquiry.
Output to a file from a WebRoutine is supported for Windows, IBM i and Linux servers but you need to be aware of differences with the X_RUN command parameters used. See a Saving a WAM's Output to a File for more details.
Since the WebRoutine output goes directly to an HTML file, it can only contain output generated by the WebRoutine.
To demonstrate WAM output to a file on Windows and IBM i (if available) you will complete the following:
Step 1. Output Employee Enquiry to a File
Step 2. Run WAM to output to a file in Windows
Step 3. Run WAM to output a file on IBM i
Summary
Before You Begin
Complete the introductory exercises, WAM005, WAM010, WAM015 and WAM020 before starting this exercise.
目次を表示 | https://docs.lansa.com/14/ja/lansa087/content/lansa/wamengt4_0755.htm | 2018-10-15T11:43:11 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.lansa.com |
[] Operator (C# Reference)
Square brackets (
[]) are used for arrays, indexers, and attributes. They can also be used with pointers.
Remarks:
System.Collections.Hashtable h = new System.Collections.Hashtable(); h["a"] = 123; // Note: using a string as the index.
Square brackets are also used to specify Attributes:
//.
C# Language Specification
For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage. | https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/index-operator | 2018-10-15T11:42:32 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.microsoft.com |
Home > Admin Reference Admin commands and utilities reference. yb-ctl Command line utility to easily manage Linux and macOS based local clusters. yb-docker-ctl Command line utility to easily manage Docker based local clusters. docker-compose Manage Docker based local clusters via docker-compose. yb-master Manages metadata of data stored in yb-tserver and coordinates cluster-wide operations. yb-tserver Data node that hosts and serves user data. Give Feedback Was this page helpful? Yes No Help us improve our docs Email address Incorrect info Unclear info Not detailed enough Needs more examples Typo Something's broken Other Submit | https://docs.yugabyte.com/latest/admin/ | 2018-10-15T10:21:19 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.yugabyte.com |
Great results feeding Doc’s Eco Eggs Brew to Corals By Dr. Paul Whitby on Aug 21, 2014 Doctor Eco Systems is a relatively new, self styled group with a focus on providing low cost, high quality, food stuffs for your marine aquaria. I have met with the […]Read more →
Doc’s Eco Eggs Reviewed Posted on June 2, 2014 by Josh Saul We’re always on the lookout for new cool reef foods to drive our tank crazy, and wow, does Doc’s Eco Eggs deliver! This brine-stabilized blend of fish eggs comes in a squeeze type tube that […]Read more →
Doc’s Eco Eggs is a New All Purpose Food Made from Fish Eggs Posted on May 14, 2014 by Brandon Klaus at AquaNerds Doc’s Eco Eggs is a new all-purpose food from Doctor Ecosystems that has your fish, corals, and invertebrates square in its sights. Made from […]Read more →
Doc’s Eco Eggs Is a New Fish & Coral Food Made from Freshwater Fish Eggs Doc’s Eco Eggs is a new offering of freshwater fish eggs being applied for use in feeding corals and marine fish. The 1 to 2 mm diameter Doc’s Eco Eggs are very […]Read more →
Hi, I’m Doc from Doctor Eco Systems; I’ve been an avid hobbyist for 35 years and started Doctor Eco Systems because I, and the entire team here at Doc’s Eco, are committed to providing products that help mimic the environment in which our animals thrive in the […]Read more →
We recently added four new stores: Reefs-r-Us in Paxton, IL; Reef City USA in Peoria Heights, IL; Pet Country in Conway, AR; and Saltwater Addictionz in Stonington, IL! We’re now in nine states! Find us today. And if you don’t have a local store that has our […]Read more →
We are proud to announce that you can find Doc’s Eco Matter in two stores in Illinois and seven stores in Missouri! Find us at: Aquatic Treasures (Collinsville, IL) The Corner Reef (Columbia, IL) Aqua World (St. Louis, MO) Gateway Aquatics (St. Louis, MO) Lynn’s Pets (Wentzville, […]Read more →
Here’s a video of our Eco Matter starting with a pregnant female copepod, then a male with many live rotifers around it, and some phytoplankton.Read more →
We | http://docseco.com/docs-news/ | 2016-07-23T13:01:32 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docseco.com |
Hiera 1: Overview
Included in Puppet Enterprise 3.8. A newer version is available; see the version menu above for details.
Hiera is a key/value lookup tool for configuration data, built to make Puppet better and let you set node-specific data without repeating yourself. See “Why Hiera?” below for more information, or get started using it right away:
Getting Started With Hiera
To get started with Hiera, you’ll need to do all of the following:
- Install Hiera, if it isn’t already installed.
- Make a
hiera.yamlconfig file.
- Arrange a hierarchy that fits your site and data.
- Write data sources.
- Use your Hiera data in Puppet (or any other tool).
After you have Hiera working, you can adjust your data and hierarchy whenever you need to. You can also test Hiera from the command line to make sure it’s fetching the right data for each node..
This way, you only have to write down the differences between nodes. When each node asks for a piece of data, it will get the specific value it needs.
To decide which data sources can override which, Hiera uses a configurable hierarchy. This ordered list can include both static data sources (with names like “common”) and dynamic ones (which can switch between data sources based on the node’s name, operating system, and more). | https://docs.puppet.com/hiera/1/ | 2016-07-23T13:03:10 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docs.puppet.com |
H:\XML\FY14 CR\100113.149.XML XXXXXXXX doak 10/1/2013 22:59 XXXXXXXX 10/01/2013 22:03 l:\VA\100113\A100113.030.xml 10/01/2013 22:59:58 XXXXXXXX x:\xx\xxxxxx\xxxxxx.xxx.xml xx/xx/xxxx xx:xx:xx xx doak 1131-1001-810443 561733|2 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX [Discussion Draft] (Original Signature of Member) [DISCUSSION DRAFT] I 113th CONGRESS 1st Session H. R. __ IN THE HOUSE OF REPRESENTATIVES October 2, 2013 Mr. Rogers of Kentucky introduced the following bill; which was referred to the Committee on _______________ A BILL Making continuing appropriations during a Government shutdown to provide pay and allowances to members of the reserve components of the Armed Forces who perform inactive-duty training during such period. 1. Short title This Act may be cited as the Pay Our Guard and Reserve Act. 2. Continuing appropriations for pay and allowances for certain reserve component members of the Armed Forces (a) In general There are hereby appropriated for fiscal year 2014, out of any money in the Treasury not otherwise appropriated, for any period during which interim or full-year appropriations for fiscal year 2014 are not in effect such sums as are necessary to provide pay and allowances to members of the reserve components of the Armed Forces (as named in section 10101 of title 10, United States Code) who perform inactive-duty training (as defined in section 101(d)(7) of such title) during such period. (b) Termination Appropriations and funds made available and authority granted pursuant to this section shall be available until whichever of the following first occurs: (1) the enactment into law of an appropriation (including a continuing appropriation) for any purpose for which amounts are made available in this section; (2) the enactment into law of the applicable regular or continuing appropriations resolution or other Act without any appropriation for such purpose; or (3) January 1, 2015. | http://docs.house.gov/billsthisweek/20130930/BILLS-113hr-PIH-Guard.xml | 2016-07-23T13:07:24 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docs.house.gov |
Search
View tree
Close tree
|
Preferences
|
|
Feedback
|
Legislature home
|
Table of contents
Previous file:
AJR5: Enrolled Joint Resolution
2009 Assembly Joint Resolution 7
ENROLLED JOINT RESOLUTION
Relating to:
the life and public service of Judge Ted E. Wedemeyer, Jr.
Whereas, the Honorable Ted E. Wedemeyer, Jr., departed this life on July 23, 2008; and
Whereas, Judge Wedemeyer was born in Milwaukee, Wisconsin, on August 30, 1932, graduated from Marquette High School and the College of the Holy Cross, earned his law degree from Marquette University, and earned a master's degree in taxation from the John Marshall School of Law; and
Whereas, Judge Wedemeyer served with distinction in the United States Air Force; and
Whereas, Judge Wedemeyer assisted in establishing the municipal court system in the city of Milwaukee in 1975, served as a municipal judge in that system, was appointed by Governor Schreiber to the circuit court in Milwaukee County in 1977, and was elected to the court of appeals in 1982, eventually serving as the presiding judge of District I; and
Whereas, during his judicial career, Judge Wedemeyer was actively involved in changing the court system to include night court and the municipal court, onsite court hearings and building code violations inspections, and a pilot study of cameras in the courtroom in Wisconsin; and
Whereas, Judge Wedemeyer made important contributions to the community as chair of the Board of Zoning Appeals in Milwaukee, as trustee of the Milwaukee Library System, as president of the Wisconsin Children's Service Society, as president of the Milwaukee Kickers, and as a member of the board of directors of the Wisconsin Soccer Association, Junior Achievement, Big Brothers Big Sisters, and Wisconsin Correctional Services; and
Whereas, Judge Wedemeyer was a founder and tireless promoter of youth soccer, particularly in his work developing the Milwaukee Kickers; and
Whereas, Judge Wedemeyer actively supported the German ethnic community in Milwaukee, serving as president of Goethe Haus Milwaukee, being involved in the Hessian Society of Wisconsin, and being a member of and performer with the Muellers German dance troupe; and
Whereas, Judge Wedemeyer was actively involved in the Wisconsin Easter Seal Society, St. Thomas More High School, the Milwaukee Christian Center, Volunteers of America, St. Joseph's Foundation, the American Legion, the Ancient Order of Hibernians, and the Wings of Corporate Mercy and was a member of the Fourth Degree Knights of Columbus; and
Whereas, Judge Wedemeyer was a founding member of the State Bar of Wisconsin Appellate Practice Section and served as one of its early chairs and, through work with the Judicial Council Appellate Procedure Committee, initiated many positive changes in state appellate practice rules; and
Whereas, Judge Wedemeyer was a friend and mentor to countless judges and lawyers throughout the state; now, therefore, be it
Resolved by the
assembly
, the
senate
concurring, That
the members of the Wisconsin legislature honor Judge Ted E. Wedemeyer, Jr., for his decades of public service and positive contribution to the development of the law and the fabric of the community in the city and county of Milwaukee and Wisconsin; and, be it further
Resolved, That
the assembly chief clerk shall provide a copy of this joint resolution to Ted E. Wedemeyer's wife, Susan Wedemeyer, and to his sister, Suzanne McKee.
Next file:
AJR9: Enrolled Joint Resolution
/2009/related/enrolled/ajr7
true
enrolledbills
/2009/related/enrolled/ajr7
enrolledbills/2009/REG/AJR7
enrolledbills/2009/REG/AJR7
section
true
PDF view
View toggle
Cross references for section
View sections affected
References to this
Reference lines
Clear highlighting
Permanent link here
Permanent link with tree | http://docs.legis.wisconsin.gov/2009/related/enrolled/ajr7 | 2012-05-24T07:55:36 | crawl-003 | crawl-003-008 | [] | docs.legis.wisconsin.gov |
Total Docs : 0
IIT
The Indian Institutes of Technology are autonomous public institutes of higher education, located in India. They are governed by the Institutes of Technology Act, 1961 which has declared them as institutions of national importance and lays down their powers, duties, and framework for governance. More Details
Important Dates
The padding Property
Exam Pattern
The padding : | https://docs.aglasem.com/org/iit/gate/question-paper?year=2019 | 2022-08-08T05:29:23 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://cdn.aglasem.com/assets/docs/images/org-logo/iit.jpg',
'IIT image'], dtype=object) ] | docs.aglasem.com |
Use tools for creating layouts with details, hatches and linetypes, and for printing.
Layouts set up your printed sheet.
Create a print layout viewport.
Manage layout viewport properties.
Manage layout detail viewports.
Conceal objects in a detail view.
Redisplay hidden objects in a detail view.
Redisplay selected hidden objects in a detail view.
Redisplay hidden layers in a detail view.
Conceal layers in a detail view.
Create a pattern of lines to fill bounding curves.
Set a starting point for existing hatches.
Manage the hatch settings for the current model.
Configures how linetypes display in viewports.
Specify a curve's linetype.
Send curves backward in draw order.
Send curves to back of draw order.
Bring curves forward in draw order.
Bring curves to the front in draw order.
Return curve draw order to the default.
Draw revision cloud curves.
Project geometry to the construction plane.
Print the current viewport or layouts.
Move or copy objects between layout and detail viewports.
Use text and dimensions for annotation
Rhinoceros 6 © 2010-2020 Robert McNeel & Associates. 11-Nov-2020 | http://docs.mcneel.com/rhino/6/help/en-us/seealso/sak_drafting.htm | 2022-08-08T04:22:34 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mcneel.com |
Problemas conocidos con el nuevo plugin Editor - TinyMCE
From Joomla! Documentation
Outdated translations are marked like this..
Versiones afectadas
Información general
Esto se refiere sólo a la(s) siguiente(s) versión(es) de Joomla!: 3.7.0 | https://docs.joomla.org/J3.x:Known_issues_with_the_new_plugin_Editor_-_TinyMCE/es | 2022-08-08T04:16:05 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.joomla.org |
We have migrated a few of our tables from on-premises Oracle to a dedicated SQL pool via ADF. One of these tables has a BLOB datatype column. The records in that on-premises (Oracle) table are in the format shown below:
But when we migrated to the dedicated pool, we could see the column datatype has changed to nvarchar and the records are in the format shown below:
So is there any way to validate the data?
Thanks | https://docs.microsoft.com/en-us/answers/questions/816037/blob-datatype-conversion.html | 2022-08-08T05:48:58 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/answers/storage/attachments/193872-image.png',
'193872-image.png'], dtype=object)
array(['/answers/storage/attachments/193873-image.png',
'193873-image.png'], dtype=object) ] | docs.microsoft.com |
System Center Integration Pack for System Center Data Protection Manager 2010
Applies To: System Center 2012 - Orchestrator, System Center 2012 R2 Orchestrator, System Center 2012 SP1 - Orchestrator
The System Center Integration Pack for System Center Data Protection Manager 2010 is an add-in for System Center 2012 - Orchestrator.
System Requirements
The DPM Integration Pack requires the following software to be installed and configured before you implement the integration. For more information about how to install and configure the Orchestrator and System Center Data Protection Manager, see the documentation for each of the following products:
System Center 2012 - Orchestrator
System Center Data Protection Manager 2010
Windows Management Framework
Downloading the Integration Pack
For information about how to obtain this integration pack, see System Center 2012 – Orchestrator 2012 Component Add-ons and Extensions.
Registering and Deploying the Integration Pack
After you download the integration pack file, you can register it with the Orchestrator and then deploy it to one or more action servers or clients. For more information about how to install integration packs, see How To Install an Integration Pack.
To register and deploy the integration pack
Copy the Data_Protection_Manager_2010_Integration_Pack.OIP integration pack file to the Orchestrator computer.
Confirm that the file is not set to Read Only as this can prevent unregistering the integration pack at a later date.
Click Start, point to All Programs, point to Microsoft System Center 2012, and then click Orchestrator. Right-click Deployment Manager, and then click Run as Administrator.
In the left pane of the Deployment Manager, expand Orchestrator Management Server. Right-click Integration Packs, and then click Register IP with the Management Server.
In the Select Integration Pack or Hotfix window, click Add. Locate and select the Data_Protection_Manager_2010_Integration_Pack.OIP file that you copied in step 1. Click Next.
In the Completing the Integration Pack Wizard dialog box, click Finish. The Log Entries pane displays a confirmation message when the integration pack is successfully registered.
In the left pane of Deployment Manager, right-click Integration Packs, and then click Deploy IP to Action Server or Client. Click Data Protection Manager 2010 Integration Pack, and then click Next.
Click the action server or client computer.
Warning
If you did not configure a deployment schedule, the integration pack deploys immediately to the computers that you specified. If you configured a deployment schedule, verify that the deployment occurred by verifying the event logs after the scheduled time has passed.
Windows Management Framework
The DPM Integration Pack uses Windows PowerShell remoting on the Runbook Designer and on the integration server and on the Data Protection Manager server. Perform the following tasks on the Orchestrator server and on the Data Protection Manager server before you configure the Data Protection Manager connection in the Runbook Designer.
To confirm the Windows Management Framework prerequisites
Confirm that you have Windows PowerShell 2.0 installed on the Orchestrator computers and on the Data Protection Manager server.
Start Windows PowerShell with the "Run as administrator" option.
At the Windows PowerShell prompt (SystemDrive:\PS>), type: Enable-PSRemoting
For more information about how to use the Enable-PSRemoting cmdlet, see Enable-PSRemoting in the Microsoft Knowledge Base.
Warning
If you have to run multiple instances of an object, make sure that the activities run in series (not in parallel) so that Data Protection Manager performance is not adversely affected.
See System Center Data Protection Manager Activities for links to configuration instructions for the objects in this integration. | https://docs.microsoft.com/en-us/previous-versions/system-center/packs/hh531742(v=technet.10)?redirectedfrom=MSDN | 2022-08-08T04:56:12 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
Application Control provides the ability to define criteria that specifically block certain applications from executing. You can define block criteria to ensure that Application Control always blocks certain applications or you can create "Assessment" criteria to monitor the applications that users access.
The Application Control Criteria screen appears.
The Block Criteria Settings screen appears.
Application Control logs all applications that match the assessment criteria but takes no further action. Application Control allows the applications to execute normally. | https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/policies/policy-resources_001/application-control-/creating-application_001.aspx | 2022-08-08T03:28:52 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.trendmicro.com |
Data Synthesis Strategies¶
new in 0.6.0
pandera provides a utility for generating synthetic data purely from
pandera schema or schema component objects. Under the hood, the schema metadata
is collected to create a data-generating strategy using
hypothesis, which is a
property-based testing library.
Basic Usage¶
Once you’ve defined a schema, it’s easy to generate examples:
import pandera as pa schema = pa.DataFrameSchema( { "column1": pa.Column(int, pa.Check.eq(10)), "column2": pa.Column(float, pa.Check.eq(0.25)), "column3": pa.Column(str, pa.Check.eq("foo")), } ) print(schema.example(size=3))
column1 column2 column3 0 10 0.25 foo 1 10 0.25 foo 2 10 0.25 foo
Note that here we’ve constrained the specific values in each column using
Check s in order to make the data generation process
deterministic for documentation purposes.
Usage in Unit Tests¶
The
example method is available for all schemas and schema components, and
is primarily meant to be used interactively. It could be used in a script to
generate test cases, but
hypothesis recommends against doing this and
instead using the
strategy method to create a
hypothesis strategy
that can be used in
pytest unit tests.
import hypothesis def processing_fn(df): return df.assign(column4=df.column1 * df.column2) @hypothesis.given(schema.strategy(size=5)) def test_processing_fn(dataframe): result = processing_fn(dataframe) assert "column4" in result
The above example is trivial, but you get the idea! Schema objects can create
a
strategy that can then be collected by a pytest
runner. We could also run the tests explicitly ourselves, or run it as a
unittest.TestCase. For more information on testing with hypothesis, see the
hypothesis quick start guide.
A more practical example involves using
schema transformations. We can modify
the function above to make sure that
processing_fn actually outputs the
correct result:
out_schema = schema.add_columns({"column4": pa.Column(float)}) @pa.check_output(out_schema) def processing_fn(df): return df.assign(column4=df.column1 * df.column2) @hypothesis.given(schema.strategy(size=5)) def test_processing_fn(dataframe): processing_fn(dataframe)
Now the
test_processing_fn simply becomes an execution test, raising a
SchemaError if
processing_fn doesn’t add
column4 to the dataframe.
Strategies and Examples from Schema Models¶
You can also use the class-based API to generate examples. Here’s the equivalent schema model for the above examples:
from pandera.typing import Series, DataFrame class InSchema(pa.SchemaModel): column1: Series[int] = pa.Field(eq=10) column2: Series[float] = pa.Field(eq=0.25) column3: Series[str] = pa.Field(eq="foo") class OutSchema(InSchema): column4: Series[float] @pa.check_types def processing_fn(df: DataFrame[InSchema]) -> DataFrame[OutSchema]: return df.assign(column4=df.column1 * df.column2) @hypothesis.given(InSchema.strategy(size=5)) def test_processing_fn(dataframe): processing_fn(dataframe)
Checks as Constraints¶
As you may have noticed in the first example,
Check s
further constrain the data synthesized from a strategy. Without checks, the
example method would simply generate any value of the specified type. You
can specify multiple checks on a column and
pandera should be able to
generate valid data under those constraints.
schema_multiple_checks = pa.DataFrameSchema({ "column1": pa.Column( float, checks=[ pa.Check.gt(0), pa.Check.lt(1e10), pa.Check.notin([-100, -10, 0]), ] ) }) for _ in range(100): # generate 10 rows of the dataframe sample_data = schema_multiple_checks.example(size=10) # validate the sampled data schema_multiple_checks(sample_data)
One caveat here is that it’s up to you to define a set of checks that are
jointly satisfiable. If not, an
Unsatisfiable exception will be raised:
schema_multiple_checks = pa.DataFrameSchema({ "column1": pa.Column( float, checks=[ # nonsensical constraints pa.Check.gt(0), pa.Check.lt(-10), ] ) }) schema_multiple_checks.example(size=10)
Traceback (most recent call last): ... Unsatisfiable: Unable to satisfy assumptions of hypothesis example_generating_inner_function.
Check Strategy Chaining¶
If you specify multiple checks for a particular column, this is what happens under the hood:
The first check in the list is the base strategy, which
hypothesisuses to generate data.
All subsequent checks filter the values generated by the previous strategy such that it fulfills the constraints of current check.
To optimize efficiency of the data-generation procedure, make sure to specify the most restrictive constraint of a column as the base strategy and build other constraints on top of it.
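A small sketch of this guidance (the column name and bound values are made up): the range check below comes first and therefore acts as the base strategy, while the inequality check only filters what the base strategy generates.

schema_ordered = pa.DataFrameSchema({
    "column1": pa.Column(
        int,
        checks=[
            pa.Check.in_range(0, 100),  # most restrictive constraint: the base strategy
            pa.Check.ne(13),            # subsequent check: filters values from the base strategy
        ],
    )
})

schema_ordered.example(size=5)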
In-line Custom Checks¶
One of the strengths of
pandera is its flexibility with regard to defining
custom checks on the fly:
schema_inline_check = pa.DataFrameSchema({ "col": pa.Column(str, pa.Check(lambda s: s.isin({"foo", "bar"}))) })
One of the disadvantages of this is that the fallback strategy is to simply
apply the check to the generated data, which can be highly inefficient. In this
case,
hypothesis will generate strings and try to find examples of strings
that are in the set
{"foo", "bar"}, which will be very slow and most likely
raise an
Unsatisfiable exception. To get around this limitation, you can
register custom checks and define strategies that correspond to them.
Defining Custom Strategies¶
All built-in
Check s are associated with a data
synthesis strategy. You can define your own data synthesis strategies by using
the extensions API to register a custom check function with a corresponding strategy.
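The following is a rough sketch of such a registration (the check name, bounds, and exact keyword arguments are illustrative and may differ slightly between pandera versions):

import pandera as pa
import pandera.extensions as extensions
import pandera.strategies as st

def in_between_strategy(pandera_dtype, strategy=None, *, min_value, max_value):
    if strategy is None:
        # base case: generate values of the column dtype directly within the bounds
        return st.pandas_dtype_strategy(
            pandera_dtype, min_value=min_value, max_value=max_value
        )
    # chained case: filter values produced by an upstream strategy
    return strategy.filter(lambda x: min_value <= x <= max_value)

@extensions.register_check_method(
    statistics=["min_value", "max_value"], strategy=in_between_strategy
)
def in_between(pandas_obj, *, min_value, max_value):
    return (min_value <= pandas_obj) & (pandas_obj <= max_value)

schema = pa.DataFrameSchema(
    {"col": pa.Column(float, pa.Check.in_between(min_value=0.0, max_value=1.0))}
)
schema.example(size=3)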
https://pandera.readthedocs.io/en/v0.7.0/data_synthesis_strategies.html | 2022-08-08T04:28:53 | CC-MAIN-2022-33 | 1659882570765.6 | [] | pandera.readthedocs.io
You are looking at documentation for an older release. Not what you want? See the current release documentation.
Click Edit Layout corresponding to your desired site on the Manage Sites panel;
Or, click→ → on the top navigation bar.
The Edit Layout form will display.
For more details on how to edit elements, see the Editing a specific portlet section.
Click Edit Navigation corresponding to your desired site on Manage Sites form;
Or, click→ → on the top navigation bar.
The Navigation Management form appears.
For more information about actions that can be done in the Navigation Management form, see the Managing navigations section.
For more details on these fields, refer to the Creating a new site section. | https://docs-old.exoplatform.org/public/topic/PLF50/PLFUserGuide.AdministeringeXoPlatform.ManagingSites.EditingSite.html | 2022-08-08T05:03:18 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-old.exoplatform.org |
Step 3. Edit Email Settings.
Contact Form Shortcodes
Create a contact form using the shortcode.
[contact_form]
Contact form with custom options:
[contact_form to="[email protected]" subject="Message Subject" thankyou="Thank you message" button="Text for Button" captcha="yes" fields="field-1,field-2,field-3"] | https://docs.wedesignthemes.com/dt_articles/how-to-add-a-contact-form-7/ | 2022-08-08T03:20:59 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-01.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-02.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-03.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-04.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-05.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-06.jpg',
None], dtype=object)
array(['https://docs.wedesignthemes.com/wp-content/uploads/2021/05/form7-07.jpg',
None], dtype=object) ] | docs.wedesignthemes.com |
Support Diagnostics role
Applies to: Exchange Server 2013
The
Support Diagnostics management role enables administrators to perform advanced diagnostics under the direction of Microsoft Customer Service and Support in an organization.
Warning
This role grants permissions to cmdlets and scripts that should only be used under the direction of Customer Service and Support. | https://docs.microsoft.com/en-us/exchange/support-diagnostics-role-exchange-2013-help | 2022-08-08T06:07:21 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com
Encryption of Data at Rest
AWS provides the tools for you to create an encrypted file system that encrypts all of your data and metadata at rest using an industry-standard AES-256 encryption algorithm. An encrypted file system is designed to handle encryption and decryption automatically and transparently, so you don't have to modify your applications. If your organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest, we recommend that you create an encrypted file system.
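As a brief illustration, encryption at rest is requested when the file system is created. The following boto3 sketch is illustrative only (the creation token is made up, and AWS credentials/region are assumed to be configured):

import boto3

efs = boto3.client("efs")

# Request encryption of data at rest when the file system is created.
response = efs.create_file_system(
    CreationToken="example-encrypted-fs",  # illustrative token
    Encrypted=True,
    # KmsKeyId="alias/aws/elasticfilesystem",  # optional; omit to use the service default key
)
print(response["FileSystemId"], response["Encrypted"])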
Topics | https://docs.aws.amazon.com/whitepapers/latest/efs-encrypted-file-systems/encryption-of-data-at-rest.html | 2022-08-08T05:54:11 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.aws.amazon.com |
Deploying an Update Server
From Joomla! Documentation
Introduction
This tutorial is designed to teach developers how to create an update server for integration with the update system introduced in Joomla!. By adding an update server listing to your extension's manifest, developers enable users to update their extensions via the Extension Manager's Update (see Joomla 3.x helpscreen) view with only a few clicks.
Defining an update server
In order to use this feature, an update server must be defined in your extension's manifest. This definition can be used in all Joomla! 2.5 and newer compatible extensions but is not available for templates. You can use two options for your server type: collection or extension. These will be explained in detail shortly. This code should be added to the extension manifest file, within the root extension element. The update server is defined as follows for each type:
<extension>
  <...>
  <updateservers>
    <server type="collection"></server>
    <server type="extension" priority="2" name="My Extension's Updates"></server>
  </updateservers>
</extension>
Multiple servers can be defined within the <updateservers> tag. If you have more than one update server, you can set a different priority for each. In that way you can control the order in which the update servers are checked. The following is an example of the update XML used for the Joomla! 3.9.6 release:
<updates>
  <update>
    <name>Joomla! 3.9</name>
    <description>Joomla! 3.9 CMS</description>
    <element>joomla</element>
    <type>file</type>
    <version>3.9.6</version>
    <infourl title="Joomla!"></infourl>
    <downloads>
      <downloadurl type="full" format="zip"></downloadurl>
      <downloadsource type="full" format="zip"></downloadsource>
      <downloadsource type="full" format="zip"></downloadsource>
    </downloads>
    <maintainerurl></maintainerurl>
    <section>STS</section>
    <targetplatform name="joomla" version="3.[789]" />
    <php_minimum>5.3.10</php_minimum>
  </update>
</updates>
The main elements of an update definition are:
- element – The installed name of the extension (required). For plugins, this needs to be the same as the plugin attribute value for the main file in the plugin manifest. For <filename plugin="pluginname">pluginname.php</filename>, the element value should be pluginname.
- type – The type of extension (component, module, plugin, etc.) (required)
- folder – Specific to plugins, this tag describes the type of plugin being updated (content, system, etc.) (required for plugins)
- client – The client of the extension (for example "site" or "administrator")
- Warning: The tag name is <client> for Joomla! 2.5 and <client_id> for 1.6 and 1.7. If you use <client_id> (rather than <client>) on a 2.5 site, it will be ignored.
- All other tags are currently ignored. If you provide more than one tag containing one of the aforementioned stability keywords, only the LAST tag will be taken into account.
- targetplatform – The target platform of the update (this is also used to detect extension compatibility for the Joomla Update component); it requires the following elements:
- name – The name of the platform dependency; as of this writing, it should ONLY be "joomla"
- version – The version of Joomla! the extension supports
- min_dev_level and max_dev_level – These attributes were added in 3.0.1 to allow you to select a target platform based on the developer level ("z" in x.y.z). They are optional. You can specify either one or both. If omitted, all developer levels are matched. For example, the following matches versions 4.0.0 and 4.0.1.
<targetplatform name="joomla" version="4.0" min_dev_level="0" max_dev_level="1"/>
- Note: If your extension is Joomla! 2.5 and/or 3.1 compatible, you will be required to have separate <update> definitions for each version due to the manner in which the updater checks the version if you specify a number. However to show your extension on all Joomla versions that support automatic updates (and thus mark as compatible with all future unreleased versions of Joomla in Joomla Update) add
<targetplatform name="joomla" version=".*"/>. If you want your extension to show on all
versions then rather than specifying a version in the version tag add in
<targetplatform name="joomla" version="3.[012345]"/>. This will show the update to all 3.x versions from version 3.0 to 3.5. If you want to include version 3.10 as well, you can use an alternation (|) in the version expression.
- SQL update script is not executed during update.
- If the SQL update script (for example, in the folder. | https://docs.joomla.org/Deploying_an_Update_Server/en | 2022-08-08T03:32:16 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.joomla.org |
Fixes
- Upgraded log4j to 2.15.0 to mitigate the security vulnerability CVE-2021-44228. 605
Recommended Java versions
- Log4j 2.15.0, which fixes the security vulnerability CVE-2021-44228, is only compatible with Java 8+. Therefore, this version of the agent is not compatible with Java 7 and is only recommended if you are using Java 8+ and are otherwise unable to upgrade to Java agent 7.4.1.
Mitigation for Java 7
Java agent versions 4.12.0 through 6.5.0 (which support Java 7) use Log4j 2.11.2 which falls into the affected range. For Java 7 users the recommended mitigation from Apache Log4j Security Vulnerabilities is to set the system property
-Dlog4j2.formatMsgNoLookups=true.
Mitigation: In releases >=2.10, this behavior can be mitigated by setting the system property
log4j2.formatMsgNoLookups to true. For releases >=2.7 and <=2.14.1, all
PatternLayoutpatterns can be modified to specify the message converter as
%m{nolookups}instead of just
%m. For releases >=2.0-beta9 and <=2.10.0, the mitigation is to remove the
JndiLookupclass from the classpath:
zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
Note: The alternate approach of defining the
LOG4J_FORMAT_MSG_NO_LOOKUPS=true environment variable will not work with the NR Java Agent.
Support statement:
- New Relic recommends that you upgrade the agent regularly to ensure that you're getting the latest features and performance benefits. Additionally, older releases will no longer be supported when they reach end-of-life. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/java-release-notes/java-agent-651/ | 2022-08-08T03:45:23 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.newrelic.com |
Matchlists
Matchlists are lists of information that can be used to check whether a transaction and/or entity contains certain text.
For example, a model (rule) can be created to flag transactions with an IP that is in the "fraudulent IP addresses" matchlist. A rule can be created to find all entities that transact with the last name "Arnapoulos", "Digne", "Horvath", all last names in a list called "Board_Members_Names".
Matchlists are sometimes referred to as "allowlists," "blacklists," and "whitelists."
Fundamentally, matchlists can be made of any text data. Most matchlists contain multiple fields. For example, a
user matchlist contains information about not only a user's
name, but also optional fields like
SSN,
DOB, and
ZIP.
If you want to check a list against multiple object types, you can also use the
string type. A string matchlist checks whether the text matches any of an event's fields, including its senders, receivers, and instruments.
The types of matchlists page provides examples of each type.
From the dashboard, you can view matchlists from the Matchlists pane.
Select a matchlist from the table to view its data:
https://docs.unit21.ai/u21/docs/denylists | 2022-08-08T04:07:04 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.unit21.ai
Within the Ok Alone system, you can create worker groups. We have changed the way worker groups are created to make it an easier, more streamlined process. Please follow these step by step instructions on how to create new groups and add workers to them.
How to Access the Groups Page
Log in to the dashboard at my.okalone.net
Click on the new Groups icon on the left hand menu.
Read the instructions at the top of the page under the heading Worker Groups.
How to create a new Group
Type the name of the group you want to create into the box Add a Group on the right hand side of the screen. Click Create the Group.
The name of the new group should appear at the bottom of the screen. To choose settings for the new group click the Manage Group button.
A new page with the new group name at the top should open.
Updating Group Settings
Groups can have a number of optional settings. These are kept by the group and can be used to easily update workers added to the group, or to update all workers at once.
Click the Check in Frequency box and select the length of time for check ins from the drop down menu.
Click the Monitor box and select who you would like from the drop down menu (you can select more than one person if required).
Read the information about SMS Alerts. Click on the box and select the number of minutes between each alert sent out from the drop down menu.
Click on SMS Alert Count and select the number of times alerts should be sent from the drop down menu.
Read the information about Call alerts. Click on the box and select the number of minutes between each call from the drop down menu.
Click Call Alert Count and select the number of times calls should be made from the drop down menu.
Select Update all Workers with these Settings if you would like them to apply to all the workers you will be entering into the group. Click Save group settings.
A pop up will ask if you wish to apply your setting choices to all workers in the new group. Click whichever option applies.
You have now created a new group! You just need to populate it with workers. Scroll back up to the top of the dashboard screen and click Worker on the left hand menu.
Adding Workers to the Group
Click View Workers from the drop down menu.
Click Group Management on the right hand of the screen.
Select your new group from Choose a Group on the drop down menu.
Scroll down your dashboard screen to see all your workers. They must be in list view, not grid view, for you to be able to select them. Click on the name of the worker you want to add to the new group (not Edit). The whole row of any worker chosen should turn green, to show it is selected. Click as many workers as you wish to add to the new group.
Scroll back to the top of the page and click Add Workers to Group (this should still show the name of the new group they are being added to).
Synchronising Worker Settings
This will take you back to the page for this specific group. Any workers whose information does not match that previously set for the new group will show in red.
If they are not to be changed leave them as they are. If the settings need to be the same as the other group members click Sync at the left hand end of their row.
Alternatively, if all workers need to be changed to the new settings, click Synchronise All Now to amend them.
All workers now have the same Monitor, Check in, SMS Alert and Call Alert settings.
Note – If the Update All workers with these Settings box is ticked a pop up will ask if you are sure you want to apply the changes to all the workers in the group. If the Update all Workers with these Settings box is not ticked when changes are saved, then the worker information will turn red to show it is not uniform across the group. This again can be changed by clicking Sync at the beginning of the row or Synchronise All Now.
Now when you go to the Groups page it will show you which groups you have created and who is in them.
The View your Workers page will also let you see which groups people have been allocated to.
| https://docs.okalone.net/worker-groups/ | 2022-08-08T03:25:55 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-8.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/09/groups-add-a-group-1024x312.jpg',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/09/worker-group-2-1024x235.jpg',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-11.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-12.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-13.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-14.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-15.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-16.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-17.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-18.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-19.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-20.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-21.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-22.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-23.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-24.png',
None], dtype=object)
array(['https://docs.okalone.net/wp-content/uploads/2020/08/image-25.png',
None], dtype=object) ] | docs.okalone.net |
Session Recording web player
Overview
The web player lets you use a web browser to view and play back recordings. Using the web player, you can:
Search for recordings by using filters, including host name, client name, user name, application, client IP address, event text, event type, and time.
View and play back both live and completed recordings with tagged events listed in the right pane.
Configure cache memory for storing recordings while playing.
Record idle events and highlight idle periods.
Leave comments about a recording and set comment severities.
Share URLs of recordings.
Note:
Supported browsers include Google Chrome, Microsoft Edge, and Firefox.
Enable the web player
The web player is enabled by default.
To disable the web player, start a Windows command prompt and run the
<Session Recording Server installation path>\Bin\SsRecUtils.exe –disablewebplayercommand.
To enable the web player, start a Windows command prompt and run the
<Session Recording Server installation path>\Bin\SsRecUtils.exe -enablewebplayercommand.
Logon and password
The URL of the web player website is
http(s)://<FQDN of Session Recording Server>/WebPlayer. To ensure the use of HTTPS, add an SSL binding to the website in IIS and update the
SsRecWebSocketServer.config configuration file. For more information, see the HTTPS configuration section in this article.
Note:
When logging on to the web player website, domain users do not need to enter credentials while non-domain users must.
Installation
Install the web player on the Session Recording Server only. Double-click SessionRecordingWebPlayer.msi and follow the instructions to complete your installation. For more information about installing Session Recording, see Install, upgrade, and uninstall.
Starting from Version 2103, Session Recording migrates the WebSocket server to IIS. With the web player installed, the SessionRecordingRestApiService, SessionRecordingWebStreaming, and WebPlayer applications appear in IIS.
A fresh installation of Session Recording 2103 and later connects your web browser to the WebSocket server hosted in IIS when you access the web player website. The WebSocket server hosted in IIS is versioned 2.0, as indicated by the registry value WebSocketServerVersion under the registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\SmartAuditor\Server.
An upgrade installation from an earlier version to Session Recording 2103 and later connects your web browser to the Python-based WebSocket server. To connect to the WebSocket server hosted in IIS, run the <Session Recording Server installation path>\Bin\SsRecUtils.exe -enablestreamingservice command. To connect back to the Python-based WebSocket server, run the <Session Recording Server installation path>\Bin\SsRecUtils.exe - disablestreamingservice command. The Python-based WebSocket server is versioned 1.0.
HTTPS configuration
To use HTTPS to access the web player website:
Add an SSL binding in IIS.
Obtain an SSL certificate in PEM format from a trusted Certificate Authority (CA).
Note:
Most popular browsers such as Google Chrome and Firefox no longer support the common name in a Certificate Signing Request (CSR). They enforce Subject Alternative Name (SAN) in all publicly trusted certificates. To use the web player over HTTPS, take the following actions accordingly:
When a single Session Recording Server is in use, update the certificate of the Session Recording Server to a SAN certificate.
When load balancing is in use, ensure that a SAN certificate is available both on Citrix ADC and on each Session Recording Server.
Locate and open the SsRecWebSocketServer.config configuration file.
The SsRecWebSocketServer.config configuration file is typically located in the <Session Recording Server installation path>\Bin\ folder.
(Optional) For Session Recording 2103 and later that host the WebSocket server in IIS, enable TLS by editing TLSEnable=1 and ignore the ServerPort, SSLCert, and SSLKey fields.
(Optional) For Session Recording 2012 and earlier, prepare the TLS certificate and key as follows:
Enter the import password that you created when exporting the .pfx file.
Run the following command to extract the private key:
openssl pkcs12 -in [yourfile.pfx] -nocerts -out [newaSRS2keyWithPassword.pem]
Save your changes.
Check your firewall settings. Allow SsRecWebSocketServer.exe to use the TCP port (22334 by default) and allow access to the web player URL.
Run the
SsRecUtils –stopwebsocketserver command.
View recordings
After you log on, the web player home page might hide or show content based on whether the following option is selected in Session Recording Server Properties.
With the option selected, the web player home page hides all content. Recordings can be accessed only by way of their URLs. Recording URLs are provided in email alerts that are sent to specified recipients. For information about email alerts, see Event response policies. You can also share recording URLs through the Share Current Playback control on recording playback pages. See descriptions later in this article.
With the option unselected, the web player home page shows content similar to the following screen capture. Click All Recordings in the left navigation to refresh the page and display new recordings if there are any. Scroll down the webpage to select recordings to view or use filters to customize your search results. For live recordings, the Duration column shows Live and the play button appears green.
To show all recording files of a recorded session, select a recording on the list and click the Follow up icon. The Follow up icon is available only when a recording is selected.
- The date and time of the recording. In this example, February 23, 2021 and 11:10:58.
- The duration of the recording in playback. In this example, 00:07:32.
- The number of events in the recording. In this example, 11 EVENTS.
- The name of the user whose session was recorded.
- The host name of the VDA where the recorded session was hosted.
- The name of the client device where the session was running.
- Options for sorting search results: Select Sort by All Categories, Sort by Events, or Sort by Comments to sort search results.
On the Configuration page, click the slider to set up the cache memory for storing recordings while playing.
Tip:
You can access the Configuration page directly through http(s)://<FQDN of Session Recording Server>/WebPlayer/#/configuration/cache.
Record idle events and highlight idle periods
Session Recording can record idle events and highlight idle periods in the Session Recording web player. Idle events are not visible in the Session Recording Player because idle events are saved in the Session Recording Database but not in the relevant recording files (
.icl files).
To customize the idle event feature, set the following registry keys as required. The registry keys are located at HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\SmartAuditor\SessionEvents.
Comment on recordings
When a recorded session is being played, you can click the Comments player control to leave comments and set comment severities. Comments of different severities are displayed in different colors in the right event list panel. Severities include Normal, Medium, and Severe. During session playback, you can view all comments about a recording and delete comments from the event list. Refresh the webpage before being able to delete a comment you just left.
Clicking a comment in the event list lets you jump to the location where the comment was given. Clicking the comment icon in the upper left corner redirects you to the My comments page where all your comments are presented.
Note:
To make the comment feature work as expected, clear the WebDAV Publishing check box in the Add Roles and Features wizard of Server Manager on the Session Recording Server.
Share URLs of recordings
Clicking Share Current Playback on the playback page of a recording copies the recording URL to the clipboard. You can share the URL with other users for them to access the recording directly without the need to search in all recordings.
After you click Share Current Playback, either of the following messages appears, indicating a successful or failed operation respectively:
The URL to the shared recording has been copied to the clipboard
Sharing the recording URL failed
Pasting the shared URL in the address bar lets you jump to the location where the URL was copied.
For secure sharing, set the following registry values under
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\SmartAuditor\Server:
Administrator Logging integrated with the web player
The web player integrates the Administrator Logging webpage. An administrator assigned to both the LoggingReader and the Player roles can view the administrator activity logs in the web player.
Note:
The language set for the web player browser must match the language you selected when you installed the Session Recording Administration components.
Configuration logging:
Recording reason logging:
Ensure that your SessionRecordingLoggingWebApplication site in IIS and the web player have the same SSL settings. Otherwise, 403 errors occur when you request to access the administrator activity logs.
| https://docs.citrix.com/en-us/session-recording/2107/view-recordings/session-recording-web-player.html?lang-switch=true | 2022-08-08T03:32:32 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/en-us/session-recording/2107/media/apps-hosted-on-iis.png',
'Applications hosted in IIS'], dtype=object)
array(['/en-us/session-recording/2107/media/websocket-server-version.png',
'WebSocket Server Version'], dtype=object)
array(['/en-us/session-recording/2107/media/hide-or-show-content-on-the-web-player-home-page.png',
'Hide or show content on the web player home page'], dtype=object)
array(['/en-us/session-recording/2107/media/follow-up-icon.png',
'Follow up icon'], dtype=object)
array(['/en-us/session-recording/2107/media/recording-search-filters.png',
'Recording search filters'], dtype=object)
array(['/en-us/session-recording/2107/media/host-name-filter-selected.png',
'The host name filter'], dtype=object)
array(['/en-us/session-recording/2107/media/all-filters-listed1.png',
'All filters listed'], dtype=object)
array(['/en-us/session-recording/2107/media/plus-symbol-1.png',
'The plus symbol'], dtype=object)
array(['/en-us/session-recording/2107/media/time-filter-added-new.png',
'Add the time filter'], dtype=object)
array(['/en-us/session-recording/2107/media/play-buttons2.png',
'Play button'], dtype=object)
array(['/en-us/session-recording/2107/media/playback-page.png',
'Recording playback page'], dtype=object)
array(['/en-us/session-recording/2107/media/event-details.png',
'The right pane of the playback page'], dtype=object)
array(['/en-us/session-recording/2107/media/cache-memory-while-playing.png',
'Configure cache memory for storing recordings while playing'],
dtype=object)
array(['/en-us/session-recording/2107/media/comments-list.png',
'Comments on a recording'], dtype=object)
array(['/en-us/session-recording/2107/media/my-comments-page.png',
'My comments page'], dtype=object)
array(['/en-us/session-recording/2107/media/configuration-in-server-manager.png',
'Image of configuration in server manager'], dtype=object)
array(['/en-us/session-recording/2107/media/share-playack-url.png',
'Share current playback'], dtype=object)
array(['/en-us/session-recording/2107/media/session-recording-logging-web-application.png',
'Image of Session Recording Logging Web Application'], dtype=object) ] | docs.citrix.com |
Connect 108.2.2211¶
Release Date: 6/29/2022
Fixed¶
OM-54924 Show logging UI only when Logging is enabled in settings
OM-54531 Batch export writes linked model files to divergent paths
OM-54009 Exception on prop export when cameras unchecked
OM-53686 Provide clear guidance to users on known (repetitive) issues
OM-53335 Small fixes
OM-53136 Revit 3D Views don’t match same cameras in Create
OM-53104 Family data instancing option not remembered between Revit sessions
OM-52768 User should be notified that Revit Material Missing from Family
OM-52153 Instances sharing same materials referencing first-created Looks folder under FamilyData
OM-47285, OM-36112 Export Revit 3D Views as separate files but not as cameras
OM-47282 Convert test function into batch export
OM-36421 Clean up BIM Data and file structure
Added¶
You can now export all or a subset of Revit files within a folder (up to 3 levels of depth from the root folder) as props or projects. The hierarchy of the folders and files is preserved in the export.
To use this feature, hit the new Batch Export button:
Select either as Projects or as Props depending on the intended result.
Select the root folder containing the files you would like to export.
Check / Uncheck the files you want to export / not export respectively.
Specify the output location you would like to export these files to and the batch export begins.
Feature Notes¶
Revit files which are lower than the current version will be automatically upgraded
Users will be required to click through Revit UI popups as they appear, including workset selection and errors.
Export By Selected Views¶
With this setting checked, users can export views within a single Revit file as separate USDs. Depending on whether the user has selected prop or project export, the USDs are combined into a single project or not.
This feature is intended for models whose elements are separated by views showing worksets, phases, or other forms of categorization set by the user.
Similar to the Batch export UI, users can select which views to export. For example, this Brownstone model has views separating interior, exterior and FF&E elements:
When exported as a project to Omniverse, elements in views are exported as separate USD files and combined into a single project (viewed in Create):
Feature Notes¶
If an element is shown in multiple views, it will be exported to each USD representing that view.
Names of exported views take the provided name of the file in the export UI + the name of the view. | https://docs.omniverse.nvidia.com/con_connect/con_connect/revit_release-notes/108_2_2211.html | 2022-08-08T04:30:42 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['../../_images/revit_release-notes_108_2_2211_1Ribbon.png',
'../../_images/revit_release-notes_108_2_2211_1Ribbon.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_2BatchButton.png',
'../../_images/revit_release-notes_108_2_2211_2BatchButton.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_3BatchSteps1.png',
'../../_images/revit_release-notes_108_2_2211_3BatchSteps1.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_3BatchSteps2.png',
'../../_images/revit_release-notes_108_2_2211_3BatchSteps2.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_4ExportBySelectedViews.png',
'../../_images/revit_release-notes_108_2_2211_4ExportBySelectedViews.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_6SelectedViewsStep1.png',
'../../_images/revit_release-notes_108_2_2211_6SelectedViewsStep1.png'],
dtype=object)
array(['../../_images/revit_release-notes_108_2_2211_6SelectedViewsStep2.gif',
'../../_images/revit_release-notes_108_2_2211_6SelectedViewsStep2.gif'],
dtype=object) ] | docs.omniverse.nvidia.com |
The Vigilante Bomb KAB500 Missile is fully rigged and has animated fins. This vehicle is DIS/HLA (RPR FOM) Integration ready. Designed for simulations.
The Vigilante Bomb KAB500 is an electro-optical TV-guided fire-and-forget bomb with an armor-piercing warhead capable of penetrating up to 1.5 meters (5 ft) of reinforced concrete. This asset blueprint is fully rigged and includes animated fins. | https://docs.unrealengine.com/marketplace/ja/product/bomb-kab500-east?lang=ja | 2022-08-08T05:11:02 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.unrealengine.com
Evaluating Recommender Output
LensKit’s evaluation support is based on post-processing the output of recommenders and predictors. The batch utilities provide support for generating these outputs.
We generally recommend using Jupyter notebooks for evaluation.
When writing recommender system evaluation results for publication, it’s important to be precise about how exactly your metrics are being computed [TDV21]; to aid with that, each metric function’s documentation includes a mathematical definition of the metric.
Evaluation Topics
Saving and Loading Outputs
In our own experiments, we typically store the output of recommendation runs in LensKit experiments in CSV or Parquet files, along with whatever parameters are relevant from the configuration. | https://lkpy.readthedocs.io/en/stable/evaluation/index.html | 2022-08-08T05:22:24 | CC-MAIN-2022-33 | 1659882570765.6 | [] | lkpy.readthedocs.io |
Introducing the Visual Studio ALM Rangers – Mathias Olausson
This post is part of an ongoing series of Rangers introductions. See An index to all Rangers covered on this blog for more details.
Who you are?
I work as an ALM consultant, where I focus on software architecture and improving software development processes.
What makes you “tick”?
I’ve been in the software business for a long time now. What I really like is working software. As a developer, trainer and mentor I get to meet lots of people and see lots of projects. When I get to use my experience to help someone build better software it really makes my day.
I’m also involved in local communities, such as the Swedish .NET community SweNug, I think these communities are nice ways for people to meet and talk about experiences and look at new stuff.
Where you live?
I live in the small town of Lerum on the Swedish west-coast with my wife and two kids.
Where is the place you call home?
Although I travel quite a lot, home is where I live.
Why are you active in the Rangers program?
I think the Rangers program can help people adopt Microsoft technologies in a very practical, hands-on way. Personally I enjoy sharing whatever knowledge I’ve gained and the Rangers projects have been great for doing so.
What is the best Rangers project you worked in and why?
My favorite engagement with the Rangers has been in the TFS Integration Platform. This project was very active and we had great interactions with both the Microsoft team and the other Rangers. I think this project is a good example of how the TFS platform can be extended and I believe we will see a plethora of adapters on the market allowing integration with TFS from all sorts of tools. | https://docs.microsoft.com/en-us/archive/blogs/willy-peter_schaub/introducing-the-visual-studio-alm-rangers-mathias-olausson | 2022-08-08T03:45:13 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
Overwrite path
Let’s say you want to change the path of your page views (and events). You have profiles of users on your website (), but don’t want to see the profile name in your dashboard. But you do want to record that profiles did get visits.
This is how you set up a
myPathOverwriter-function for that usecase:
<script>
  function myPathOverwriter({ path }) {
    // Collapse all profile URLs into one generic path (the exact replacement
    // path is illustrative); profile visits are still recorded.
    if (path.startsWith("/profiles/")) path = "/profiles/profile";
    return path;
  }
</script>
You can specify the path callback function via
data-path-overwriter; in the example above, you specify the
myPathOverwriter function. The function gets an object as argument in which you find the
path key. If the function errors, returns nothing, or returns a falsy value, we keep the original path.
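For example, a sketch of the embed markup with the attribute added (the script src shown assumes the standard Simple Analytics embed and should match whatever embed snippet you already use):

<script
  async
  defer
  src="https://scripts.simpleanalyticscdn.com/latest.js"
  data-path-
></script>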
If you want to omit those page views completely, you can use our ignore pages feature. | https://docs.simpleanalytics.com/overwrite-path | 2022-08-08T04:02:55 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/images/pencil.svg', 'edit'], dtype=object)] | docs.simpleanalytics.com |
public class ContentNegotiationManagerFactoryBean extends Object implements FactoryBean<ContentNegotiationManager>, ServletContextAware, InitializingBean
Factory to create a ContentNegotiationManager and configure it with
ContentNegotiationStrategy instances.
This factory offers properties that in turn result in configuring the underlying strategies. The table below shows the property names, their default settings, as well as the strategies that they help to configure:
Alternatively you can avoid use of the above convenience builder
methods and set the exact strategies to use via
setStrategies(List).
Deprecation Note: As of 5.2.4,
favorPathExtension and
ignoreUnknownPathExtensions
are deprecated in order to discourage using path extensions for content
negotiation and for request mapping with similar deprecations on
RequestMappingHandlerMapping. For further context, please read issue
#24719.
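As an illustrative sketch (not taken from the reference documentation itself), the properties below can also be set programmatically; the parameter name, media type keys, and defaults shown are examples only:

// Assumes org.springframework.http.MediaType and the web.accept classes are imported.
ContentNegotiationManagerFactoryBean factoryBean = new ContentNegotiationManagerFactoryBean();
factoryBean.setFavorParameter(true);                          // use a query parameter for content negotiation
factoryBean.setParameterName("format");                       // e.g. /path?format=json
factoryBean.addMediaType("json", MediaType.APPLICATION_JSON);
factoryBean.addMediaType("xml", MediaType.APPLICATION_XML);
factoryBean.setIgnoreAcceptHeader(false);                     // still honor the Accept header
factoryBean.setDefaultContentType(MediaType.APPLICATION_JSON);
ContentNegotiationManager manager = factoryBean.build();      // create and initialize the manager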
OBJECT_TYPE_ATTRIBUTE
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public ContentNegotiationManagerFactoryBean()
public void setStrategies(@Nullable List<ContentNegotiationStrategy> strategies)
Note: use of this method is mutually exclusive with use of all other setters in this class which customize a default, fixed set of strategies. See class level doc for more details.
strategies- the strategies to use
public void setFavorParameter(boolean favorParameter)
Whether a request parameter ("format" by default) should be used to determine the requested media type with the registered media type mappings.
By default this is set to
false.
setParameterName(java.lang.String)
public void setParameterName(String parameterName)
Set the query parameter name to use when setFavorParameter(boolean) is on.
The default parameter name is
"format".
@Deprecated public void setFavorPathExtension(boolean favorPathExtension)
false. In 5.3 the default changes to
false and use of this property becomes unnecessary.
By default this is set to
false in which case path extensions
have no impact on content negotiation.
public void setMediaTypes(Properties mediaTypes)
The
parameter strategy requires
such mappings in order to work while the
path extension strategy can fall back on lookups via
ServletContext.getMimeType(java.lang.String) and
MediaTypeFactory.
Note: Mappings registered here may be accessed via
ContentNegotiationManager.getMediaTypeMappings() and may be used
not only in the parameter and path extension strategies. For example,
with the Spring MVC config, e.g.
@EnableWebMvc or
<mvc:annotation-driven>, the media type mappings are also plugged
in to:
ResourceHttpRequestHandler.
ContentNegotiatingViewResolver.
mediaTypes- media type mappings
addMediaType(String, MediaType),
addMediaTypes(Map)
public void addMediaType(String key, MediaType mediaType)
An alternative to setMediaTypes(java.util.Properties) for programmatic registrations.
public void addMediaTypes(@Nullable Map<String,MediaType> mediaTypes)
An alternative to setMediaTypes(java.util.Properties) for programmatic registrations.
@Deprecated public void setIgnoreUnknownPathExtensions(boolean ignore)
Whether to ignore requests with a path extension that cannot be resolved to any media type. Setting this to false will result in an HttpMediaTypeNotAcceptableException if there is no match.
By default this is set to
true.
@Deprecated public void setUseJaf(boolean useJaf)
Deprecated as of 5.0, in favor of setUseRegisteredExtensionsOnly(boolean), which has reverse behavior.
public void setUseRegisteredExtensionsOnly(boolean useRegisteredExtensionsOnly)
When favorPathExtension or
setFavorParameter(boolean) is set, this property determines whether to use only registered
MediaType mappings or to allow dynamic resolution, e.g. via
MediaTypeFactory.
By default this is not set in which case dynamic resolution is on.
public void setIgnoreAcceptHeader(boolean ignoreAcceptHeader)
By default this value is set to
false.
public void setDefaultContentType(MediaType contentType)
By default this is not set.
setDefaultContentTypeStrategy(org.springframework.web.accept.ContentNegotiationStrategy)
public void setDefaultContentTypes(List<MediaType> contentTypes)
By default this is not set.
setDefaultContentTypeStrategy(org.springframework.web.accept.ContentNegotiationStrategy)
public void setDefaultContentTypeStrategy(ContentNegotiationStrategy strategy)
Set a custom ContentNegotiationStrategy to use to determine the content type to use when no content type is requested.
By default this is not set.
setDefaultContentType(org.springframework.http.MediaType)
public void setServletContext(ServletContext servletContext)
public ContentNegotiationManager build()
Create and initialize a ContentNegotiationManager instance.
@Nullable public ContentNegotiationManager getObject()
FactoryBean.getObject(),
SmartFactoryBean.isPrototype() | https://docs.spring.io/spring-framework/docs/5.3.19/javadoc-api/org/springframework/web/accept/ContentNegotiationManagerFactoryBean.html | 2022-08-08T04:37:39 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.spring.io |
Different types of virus/malware require different scan actions. Customizing scan actions requires knowledge about virus/malware and can be a tedious task. ActiveAction addresses this with a set of pre-configured scan actions for OfficeScan computers. ActiveAction settings are updated to protect against the latest threats and the latest methods of virus/malware attacks.
ActiveAction is not available for spyware/grayware scan.
The following table illustrates how ActiveAction handles each type of virus/malware:
For probable virus/malware, the default action is "Deny Access" during Real-time Scan and "Pass" during Manual Scan, Scheduled Scan, and Scan Now. If these are not your preferred actions, you can change them to Quarantine, Delete, or Rename. | https://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_policy_templates/osce_client/scan_types_manual_cnfg/scan_set_cmn_act_virus/scan_set_cmn_act_activeact.aspx | 2022-08-08T03:48:26 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.trendmicro.com |
import numpy as np
import matplotlib.pyplot as plt
from probnum import diffeq, problems, randprocs, randvars, statespace
[2]:
# `f` (the Lotka-Volterra vector field) and `df` (its Jacobian) are assumed to be defined earlier.
t0 = 0.0
tmax = 20.0
y0 = np.array([20, 20])
ivp = problems.InitialValueProblem(t0=t0, tmax=tmax, y0=y0, f=f, df=df)
[3]:
prior = statespace.IBM(
    ordint=4,
    spatialdim=ivp.dimension,
    forward_implementation="sqrt",
    backward_implementation="sqrt",
)
initrv = randvars.Normal(mean=np.zeros(prior.dimension), cov=np.eye(prior.dimension))
prior_process = randprocs.MarkovProcess(transition=prior, initrv=initrv, initarg=ivp.t0)
ekf = diffeq.GaussianIVPFilter.string_to_measurement_model(
    "EK1", ivp=ivp, prior_process=prior_process
)
[4]:
diffmodel = statespace.PiecewiseConstantDiffusion(t0=t0)
solver = diffeq.GaussianIVPFilter.construct_with_rk_init(
    ivp,
    prior_process=prior_process,
    measurement_model=ekf,
    diffusion_model=diffmodel,
    with_smoothing=True,
)
Now we can solve the ODE. To this end, define a
StepRule, e.g.
ConstantSteps or
AdaptiveSteps. If you don’t know which firststep to use, the function
propose_firststep makes an educated guess for you.
[5]:
firststep = diffeq.propose_firststep(ivp)
steprule = diffeq.AdaptiveSteps(firststep=firststep, atol=1e-3, rtol=1e-5)
# steprule = diffeq.ConstantSteps(0.1)  # alternative: fixed steps (step size illustrative)
[6]:
# Solve the IVP to obtain `odesol` (call signature assumed; produces the solution object used below).
odesol = solver.solve(steprule=steprule)
evalgrid = np.arange(ivp.t0, ivp.tmax, step=0.1)
Done! This is the solution to the Lotka-Volterra model.
[7]:
sol = odesol(evalgrid)
plt.plot(evalgrid, sol.mean, "o-", linewidth=1)
plt.ylim((0, 30))
plt.show()
[ ]:
[ ]: | https://probnum.readthedocs.io/en/v0.1.9/tutorials/odes/odesolvers_from_scratch.html | 2022-08-08T03:32:04 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['../../_images/tutorials_odes_odesolvers_from_scratch_13_0.png',
'../../_images/tutorials_odes_odesolvers_from_scratch_13_0.png'],
dtype=object) ] | probnum.readthedocs.io |
Troubleshooting Systems Manager Run Command
Run Command provides status details with each command execution. For more information about the details of command statuses, see Understanding Command Statuses. You can also use the information in this topic to help troubleshoot problems with Run Command.
Topics
Where Are My Instances?
In the Run a command page, after you choose an SSM document to run and select Manually selecting instances in the Targets section, a list is displayed of instances you can choose to run the command on. If an instance you expect to see is not listed, check the following requirements:
SSM Agent: Make sure the latest version of SSM Agent is installed on the instance. Only Amazon EC2 Windows Amazon Machine Images (AMIs) and some Linux AMIs are pre-configured with SSM Agent. For information about installing or reinstalling SSM Agent on an instance, see Installing and Configuring SSM Agent on Amazon EC2 Linux Instances or Installing and Configuring SSM Agent on Windows Instances.
IAM instance role: Verify that the instance is configured with an AWS Identity and Access Management (IAM) role that enables the instance to communicate with the Systems Manager API. Also verify that your user account has an IAM user trust policy that enables your account to communicate with the Systems Manager API. For more information, see Create an IAM Instance Profile for Systems Manager.
Target operating system type: Double-check that you have selected an SSM document that supports the type of instance you want to update. Most SSM documents support both Windows and Linux instances, but some do not. For example, if you select the SSM document
AWS-InstallPowerShellModule, which applies only to Windows instances, you will not see Linux instances in the target instances list.
Getting Status Information on Windows Instances
Use the following command to get status details about one or more instances:
Get-SSMInstanceInformation -InstanceInformationFilterList @{Key="InstanceIds";ValueSet="
instance-ID","
instance-ID"}
Use the following command with no filters to see all instances registered to your account that are currently reporting an online status. Substitute the ValueSet="Online" with "ConnectionLost" or "Inactive" to view those statuses:
Get-SSMInstanceInformation -InstanceInformationFilterList @{Key="PingStatus";ValueSet="Online"}
Use the following command to see which instances are running the latest version of the EC2Config service. Substitute ValueSet="LATEST" with a specific version (for example, 3.0.54 or 3.10) to view those details:
Get-SSMInstanceInformation -InstanceInformationFilterList @{Key="AgentVersion";ValueSet="LATEST"}
Getting Status Information on Linux Instances
Use the following command to get status details about one or more instances:
aws ssm describe-instance-information --instance-information-filter-list key=InstanceIds,valueSet=instance-ID
Use the following command to see all instances registered to your account that are currently reporting an online status. Substitute valueSet=Online with ConnectionLost or Inactive to view those statuses:
aws ssm describe-instance-information --instance-information-filter-list key=PingStatus,valueSet=Online
Use the following command to see which instances are running the latest version of SSM Agent. Substitute valueSet=LATEST with a specific version (for example, 1.0.145 or 1.0) to view those details:
aws ssm describe-instance-information --instance-information-filter-list key=AgentVersion,valueSet=LATEST
If the describe-instance-information API operation returns a PingStatus of Online, then your instance is ready to be managed using Run Command. If the status is Inactive, the instance has one or more of the following problems.
SSM Agent is not installed.
The instance does not have outbound internet connectivity.
The instance was not launched with an IAM role that enables it to communicate with the SSM API, or the permissions for the IAM role are not correct for Run Command. For more information, see Create an IAM Instance Profile for Systems Manager.
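When you manage many instances, it can be quicker to script this check, flag everything that is not reporting Online, and then work through the three causes above for each flagged instance. A minimal boto3 sketch (not an excerpt from this guide):

import boto3

ssm = boto3.client("ssm")

# Page through every instance registered with Systems Manager and flag
# the ones whose ping status is not Online.
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for inst in page["InstanceInformationList"]:
        status = inst["PingStatus"]
        flag = "" if status == "Online" else "  <-- check agent, connectivity, and IAM role"
        print(f'{inst["InstanceId"]}  {status}  agent {inst.get("AgentVersion", "unknown")}{flag}')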
Troubleshooting SSM Agent
If you experience problems executing commands using Run Command, there might be a problem with SSM Agent. Use the following information to help you view SSM Agent log files and troubleshoot the agent.
View SSM Agent Log Files
SSM Agent logs information in the following files. The information in these files can help you troubleshoot problems.
Note
If you choose to view these logs by using Windows File Explorer, be sure to enable the viewing of hidden files and system files in Folder Options.
On Windows
%PROGRAMDATA%\Amazon\SSM\Logs\amazon-ssm-agent.log
%PROGRAMDATA%\Amazon\SSM\Logs\errors.log
On Linux
/var/log/amazon/ssm/amazon-ssm-agent.log
/var/log/amazon/ssm/errors.log
Enable SSM Agent Debug Logging
Use the following procedure to enable SSM Agent debug logging on Windows Server and Linux managed instances.
Either use Systems Manager Session Manager to connect to the instance where you want to enable debug logging, or log on to the managed instance. For more information, see Working with Session Manager.
Make a copy of the seelog.xml.template file. Change the name of the copy to seelog.xml. The file is located in the following directory:
Windows Server: %PROGRAMFILES%\Amazon\SSM\seelog.xml.template
Linux: /etc/amazon/ssm/seelog.xml.template
Edit the seelog.xml file to change the default logging behavior. Change the value of minlevel from info to debug, as shown in the following example.
<seelog type="adaptive" mininterval="2000000" maxinterval="100000000" critmsgcount="500" minlevel="debug">
Windows only: Locate the following entry:
filename="{{LOCALAPPDATA}}\Amazon\SSM\Logs\amazon-ssm-agent.log"
Change this entry to use the following path:
filename="C:\ProgramData\Amazon\SSM\Logs\amazon-ssm-agent.log"
Windows only: Locate the following entry:
filename="{{LOCALAPPDATA}}\Amazon\SSM\Logs\errors.log"
Change this entry to use the following path:
filename="C:\ProgramData\Amazon\SSM\Logs\errors.log"
Restart SSM Agent.
Windows Server: Use Windows Services Manager to restart the Amazon SSM Agent.
Linux: Run the following command:
sudo restart amazon-ssm-agent | https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-remote-commands.html | 2019-07-16T04:55:09 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.aws.amazon.com |
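Note that sudo restart amazon-ssm-agent is the Upstart form of the command; on systemd-based distributions the equivalent is typically sudo systemctl restart amazon-ssm-agent.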
Face Privacy Filter Release Notes
0.3.4
- Clean up tutorial documentation naming and remove deprecated swagger demo app
- Standardize demo CSS, add region drawing to demo page
0.3.3
- Clean up documentation for install and parameter descriptions
- Add documentation and functionality for environment variables in push request
0.3.2
- Minor updates to web JS demo pages for pending recognition model
- Type change: rename input and output types to region monikers to better reflect the target
0.3.1
- Update model to use single image as input type
- Update javascript demo to run with better CORS behavior (github htmlpreview)
- Additional documentation for environmental variables
- Simplify operation for active prediction to use created model (no save+load required)
0.2.3
- Documentation and package update to use install instructions instead of installing this package directly into a user’s environment.
- License addition
0.2.2
- Refactor documentation into sections and tutorials.
- Create this release notes document for better version understanding.
0.2.1
- Refactor to remove the demo bin scripts and rewire for direct call of the script filter_image.py as the primary interaction mechanism.
0.2.0
- Refactor for compliant dataframe usage following primary client library examples for repeated columns (e.g. dataframes) instead of custom types that parsed rows individually.
- Refactor web, api, main model wrapper code for corresponding changes.
- Migration from previous library structure to new acumos client library
- Refactor to not need this library as a runtime/installed dependency | https://docs.acumos.org/en/athena/submodules/face-privacy-filter/docs/release-notes.html | 2019-07-16T04:09:11 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.acumos.org |