Columns: content, url, timestamp (timestamp[ms]), dump, segment, image_urls, netloc
ProkaryMetrics is a software package providing tools to analyze microbial biofilms through the 3D reconstruction of fluorescent microscopy image data and the manual annotation of bacteria. ProkaryMetrics allows full 3D interaction with such data sets and provides users with visual and quantitative analysis tools for gathering measurements on individual bacteria as well as on the biofilm as a whole. Contents:
https://prokarymetrics.readthedocs.io/en/latest/
2018-03-17T06:39:55
CC-MAIN-2018-13
1521257644701.7
[]
prokarymetrics.readthedocs.io
Blog or Post Archive Sidebar Drops Below Content Problem Description When viewing the Blog page or a post category view, the sidebar drops below the content area. Cause This is caused either by incompatible spans being used in a custom template (i.e. span-9 and span-4, which together exceed the total of 12 possible columns) or by a child theme that has not been updated for compatibility with the new Grid in Layers 1.5. This can also happen if you set a static page as the Posts List in Settings → Reading. This option should be left blank, or it replaces the template with a default WordPress loop that does not contain the correct grid wrapper. Solution Verify no static blog page is set - Go to Settings → Reading - Set the Posts Page to the blank/default option - You can still set a static Home page See Site or layouts broken after updating to 1.5
http://docs.layerswp.com/doc/blog-or-post-archive-sidebar-drops-below-content/
2018-03-17T06:06:01
CC-MAIN-2018-13
1521257644701.7
[]
docs.layerswp.com
Table of Contents Product Index Showing the love! These poses were designed to show off the affectionate side of the Toon Generations 2. Holding hands, piggybacking, flirting, fun. From teenage crush to the honeymoon phase to long-lasting love, these poses can help illustrate the affectionate story of the Toons as time goes by. Bring your renders to life.
http://docs.daz3d.com/doku.php/public/read_me/index/36383/start
2021-10-16T02:18:07
CC-MAIN-2021-43
1634323583408.93
[]
docs.daz3d.com
Syncplicity for users Directory of Syncplicity documentation: Managing files on Android - Contextual actions on files for Android - Adding photos, videos and audio files on Android - Copying files on Android - File locking on Android - Stream videos on Android - Creating and editing Microsoft Office files on Android - Supported Microsoft Office features on Android - Viewing an RM-protected file on Android
https://docs.axway.com/bundle/Syncplicity/page/managing_files_on_android.html
2021-10-16T02:55:38
CC-MAIN-2021-43
1634323583408.93
[]
docs.axway.com
What is an NDF file? A file with .ndf extension is a secondary database file used by Microsoft SQL Server to store user data. NDF is a secondary storage file because SQL Server stores user-specified data in a primary storage file known as MDF. An NDF data file is optional and is user-defined to manage data storage in case the primary MDF file uses all the allocated space. It is usually stored on a separate disk and can be spread across multiple storage devices. The presence of the MDF file is necessary in order to open NDF files. NDF File Format The NDF file format is no different from MDF and uses pages as the fundamental unit of data storage. Each page starts with a 96-byte header that includes: - Page ID - Type of structure - Number of records on the page - Pointers to the previous and next pages NDF File Structure An NDF data file has the following structure (the same layout as an MDF file). - Page 0: Header - Page 1: First PFS - Page 2: First GAM - Page 3: First SGAM - Page 4: Unused - Page 5: Unused - Page 6: First DCM - Page 7: First BCM NDF Data File Page Pages in a SQL Server data file start from zero (0) and increment sequentially. Each file is recognized by a unique file ID number. The file ID and page number pair uniquely identifies a page in a database. An example, shown in the following image, illustrates page numbers in a database that has a 4-MB primary data file and a 1-MB secondary data file.
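Because the page layout above is fixed, the location of any page inside a data file can be computed directly. The following minimal Python sketch (an illustration, not part of the original article) assumes the standard SQL Server page size of 8 KB and simply extracts the raw 96-byte header of a chosen page; it does not decode individual header fields, and the file name is hypothetical.

# Sketch: locate a page inside an NDF/MDF data file and read its raw header.
# Assumes the standard SQL Server page size of 8 KB (8192 bytes); header fields
# are not decoded here, only returned as raw bytes.
PAGE_SIZE = 8192   # bytes per page in a SQL Server data file
HEADER_SIZE = 96   # every page starts with a 96-byte header

def read_page_header(path: str, page_number: int) -> bytes:
    """Return the raw 96-byte header of the given page (pages are numbered from 0)."""
    with open(path, "rb") as data_file:
        data_file.seek(page_number * PAGE_SIZE)
        return data_file.read(HEADER_SIZE)

if __name__ == "__main__":
    header = read_page_header("user_data.ndf", 0)  # hypothetical secondary data file
    print(f"Read {len(header)} header bytes from page 0")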
https://docs.fileformat.com/database/ndf/
2021-10-16T01:45:42
CC-MAIN-2021-43
1634323583408.93
[array(['../ndf.png', 'NDF Database File Format'], dtype=object)]
docs.fileformat.com
This prepares the HTTP cache to be used. After clearing the cache, it can be warmed up in order to provide faster loading times on the first call to the frontend. After clearing the cache, the cache warmer appears as a popup window.
https://docs.shopware.com/en/shopware-5-en/settings/cache-performance-module?version=1.0.0&category=shopware-5-en/settings
2021-10-16T02:22:10
CC-MAIN-2021-43
1634323583408.93
[]
docs.shopware.com
About AliTV¶ AliTV (Alignment Toolbox & Visualization) is free software for visualizing whole genome alignments as circular or linear maps. The figure is presented on an HTML page with an easy-to-use graphical user interface. Comparisons between multiple genomic regions can be generated, interactively analyzed and exported as SVG. Installation¶ For viewing pre-computed visualizations (available as .json files) no installation is required. Such a file can be imported on the demo website. If you are interested in creating whole genome alignment visualizations for your own data please refer to the README of the AliTV-perl-interface. Citation¶ Please cite this article when using AliTV. The source code in any specific version can additionally be cited using the zenodo doi, latest: Tutorial I: Simple comparison of chloroplast genomes¶ In this example, chloroplast genomes of seven parasitic and non-parasitic plants are compared and analyzed with AliTV. The files needed for this tutorial are used as the default data, so you do not need to import them. Visualizing the Whole-Genome Alignment¶ Open AliTV.html with your favourite web browser. When you use AliTV for the first time, the chloroplast data are used as the default data. The AliTV image using the default configuration. As you can see, seven chloroplast genomes and the alignments between adjacent genomes are visualized. The genome of N. tabacum is split into two parts. The phylogenetic tree represents the relationship between the analyzed genomes. Create figure¶ - To use the figure you have to download it. Click the corresponding button to save the picture. - Open the folder where you saved the AliTV image and open it with your favourite image viewer. Change the genome orientation¶ - The orientation of S. americana is opposite to that of the adjacent genomes, because the alignments between them are twisted. Therefore AliTV provides the option to set the sequence orientation forward or reverse in order to obtain a clearer comparison. - A context menu appears when you right-click the genome of S. americana. Select the corresponding entry. - With one easy mouse click you have changed the orientation of S. americana and can now analyse the alignments between this and the adjacent genomes. Set tree configurations¶ - In the current settings the phylogenetic tree of the analyzed plants is presented. AliTV provides easy options for changing the tree layout. - Choose the corresponding tab on the HTML page. - Here you can decide whether the tree is drawn or not. For the next few steps it is easier if the tree is not shown, so uncheck the checkbox. - Submit the changes. - Now the phylogenetic tree is hidden. You can easily show it again by checking the checkbox. Change the genome and chromosome order¶ - If you want to compare N. tabacum to L. philippensis, AliTV provides an easy option for changing the genome order. - You can change the order by right-clicking N. tabacum. The context menu appears. Select the corresponding entry. Now N. tabacum has swapped its position with the genome below, so you can easily compare N. tabacum with L. philippensis. - To change the order of chromosomes within a genome (for example N. tabacum), select the chromosome you want to reorder and choose the corresponding entry in the context menu. - When you want to save the image with the current settings you can download it as described in Create Figure. Filter links by identity and length¶ - For a biological analysis it may be helpful to filter links by their identity or length. AliTV offers both options for analyzing the image easily and interactively. 
- Choose the corresponding tab on the HTML page. - To filter links, use the sliders. Set the range of the identity slider to 85% to 100% and submit your changes. - As you can see, some of the red and orange colored links are no longer shown because their identity is less than 85%, so they are filtered out. - In the same way, links can be filtered by their length. So try it and have fun with these nice sliders! Change graphical parameters¶ AliTV provides many ways to customise the image. The following list only shows a few examples. For more information check out Features of AliTV or try it yourself. Setting the layout¶ - Select your favourite layout (circular or linear). - At the moment the circular layout is not at the same development stage as the linear layout; therefore most options and interactive functions do not work if you use it. Coloring the chromosomes¶ With AliTV it is possible to change the color range of the presented genomes as well as the color of features and labels. Define a new start and end color by using the color picker. It is also possible to type in the Hex or RGB value of your favourite color. If you use #00ffc2 for color 1 and #ff8a00 for color 2 you get the following crazy AliTV image. Setting new values for the genome color. Scaling the chromosomes¶ If you want to change the default scaling of the sequences, type in a new tick distance in bp. The labeling of the ticks changes as well, because every tenth tick is labeled by default. When you want to change the tick labels you can type in their frequency in the current tab. If you set the tick distance to 10000 bp and the label frequency to 3 you get the following image. AliTV offers easy scaling of chromosomes. Features of AliTV¶ Main Screen¶ Above is the user interface of AliTV available on the HTML page when you generate the figure. User interface of AliTV Contains information about the software as well as direct links to the demo version, the manual and the code documentation. Filters the alignments according to their identity and length by using the sliders; the changes are submitted with the corresponding button. On top of that, this tab contains information about all links, features or chromosomes which are hidden in the current settings. By using the selectors you can show a specific hidden element again; clicking the button submits the changes. AliTV provides the possibility to show genes, inverted repeats, repeats and N-stretches by default. If you assigned the necessary data to AliTV you can configure them by using this feature. With the checkboxes genes are shown, hidden or labeled. You can choose between a rect or an arrow and you can color them by using the color picker, then submit the changes. It is the same procedure for inverted repeats, repeats and N-stretches, but it is important that you assign the data to AliTV; otherwise no biological features are visualized. With AliTV it is possible to visualize custom features like specific gene groups, t-RNAs and other crazy stuff you want to show on the chromosomes. Simply type in the name of your custom feature group, select a form and choose a color. That's it! As you can see, it is very easy to visualize all kinds of biological features with AliTV. Here you can decide whether the phylogenetic tree is drawn or not. Moreover you can show the tree to the left or right of the alignments and you can change its width. All parameters that deal with color, size and layout of the AliTV image can be set here. 
You can change the image size, the colors of chromosomes and links, and the labels. AliTV uses the JSONEditor to let you change parameters, filters and the data structure directly next to the figure. By clicking the corresponding button, the editor appears at the bottom of the HTML page. First you see the structure of an AliTV object with data, filters and conf. data contains the data and simply consists of karyo, links, features (optional) and tree (optional). All graphical parameters like color, size, layout, etc. and other configuration for drawing the AliTV image are written in conf. Specific configurations like hidden chromosomes, links or features and the minimal and maximal link identity and length are assigned to filters. All modifications you make are submitted by clicking the corresponding button. For more information about the object structure and the meaning of the variables check out the documentation of AliTV. If you have problems with the JSONEditor, its own documentation may be helpful. If you want to use the AliTV image you have to download it. You can download the AliTV image as SVG and open it with your favourite image viewer. Moreover, you can export the current settings in JSON format, which may be helpful if you want to save the settings and use them some other time. You can then import that JSON data again later. Generating json files with alitv.pl¶ In case you want to visualize your own data you need to generate a json file. This can be done using the alitv.pl Perl script. It has a simple mode where you just call it with a bunch of fasta files. In this case pairwise alignments are calculated using lastz and a json file with default settings is created. For more advanced usage there is the possibility to supply a yml file with custom parameters. Please consult the README of the AliTV-perl-interface project for more information. Here is an outline of the steps required to reproduce the demo data sets: Chloroplasts¶ This dataset consists of the chloroplasts of seven parasitic and non-parasitic plants. All of those are published at NCBI with accession numbers NC_025642.1, NC_001568.1, NC_022859.1, NC_001879.2, NC_013707.2, NC_023464.1, and NC_023115.1. Along with the fasta files, GenBank files with annotations are available, so it is easy to extract the locations of ndh and ycf genes as well as the inverted repeat regions. This data set is also used as the test set in AliTV-perl-interface, so best refer to that repository and specifically the input.yml file there. Bacteria¶ This dataset consists of four strains of Xanthomonas arboricola of which two are pathogenic and two are not (Cesbron S, Briand M, Essakhi S, et al. Comparative Genomics of Pathogenic and Nonpathogenic Strains of Xanthomonas arboricola Unveil Molecular and Evolutionary Events Linked to Pathoadaptation. Frontiers in Plant Science. 2015;6:1126. doi:10.3389/fpls.2015.01126.). Data is available for download from: The xanthomonas.yml file has the following content:
---
genomes:
  - name: Xanthomonas arboricola pv. juglandis JZEF
    sequence_files:
      - JZEF01.1.fsa_nt
  - name: Xanthomonas arboricola pv. juglandis JZEG
    sequence_files:
      - JZEG01.1.fsa_nt
  - name: Xanthomonas arboricola JZEH
    sequence_files:
      - JZEH01.1.fsa_nt
  - name: Xanthomonas arboricola JZEI
    sequence_files:
      - JZEI01.1.fsa_nt
alignment:
  program: lastz
  parameter:
    - "--format=maf"
    - "--ambiguous=iupac"
    - "--strand=both"
    - "--notransition"
    - "--step=20"
So the parameters for lastz are set a little less sensitive (compared to default settings) to reduce runtime. 
The resulting json file is still >25MB in size and contains lots of very short links with low identity. In order to have better performance in the visualization, those can be filtered on the json level with alitv-filter.pl. The commands to create the final json are:
alitv.pl xanthomonas.yml --project xanthomonas
alitv-filter.pl --in xanthomonas.json --out xanthomonas_arboricola.json --min-link-len 1000 --min-link-id 60
Chromosome 4¶ To demonstrate the capability of visualizing whole mammalian-sized chromosomes we imported a pre-calculated alignment of human and chimp chromosome 4 from Ensembl. These three files have been downloaded: - - - After unpacking all alignments, those to chromosome 4 of human were combined. Usually alitv.pl handles tasks like renaming ids to make them unique transparently. However, this cannot be done when importing pre-calculated alignments. In our example both chromosomes have the id 4 and are prefixed in the maf file with homo_sapiens. and pan_troglodytes. respectively. Furthermore the file chr4.maf contains all alignments to chromosome 4 of human and we only want those from chromosome 4 of chimp. So to combine and clean the maf and fasta files and to prepare them for import into AliTV you can do the following:
zcat *H.sap.4.* >chr4.maf
perl -pe 's/homo_sapiens\.4/h4/;s/pan_troglodytes.4/p4/' chr4.maf >chr4.fixID.maf
perl -ne 'print if(/^#/);if(/^a/){$s=$_}if(/^s h4/){$h=$_}if(/^s p4/){print $s.$h.$_."\n"}' chr4.fixID.maf >chr4.clean.maf
perl -pe 's/^>4/>h4/' Homo_sapiens.GRCh38.dna.chromosome.4.fa >Homo_sapiens_chr4.fa
perl -pe 's/^>4/>p4/' Pan_troglodytes.CHIMP2.1.4.dna.chromosome.4.fa >Pan_troglodytes_chr4.fa
Now alitv.pl can be executed with the chr4.yml:
---
genomes:
  - name: Homo sapiens
    sequence_files:
      - Homo_sapiens_chr4.fa
  - name: Pan troglodytes
    sequence_files:
      - Pan_troglodytes_chr4.fa
alignment:
  program: importer
  parameter:
    - "chr4.clean.maf"
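As a quick sanity check on any json produced by alitv.pl or alitv-filter.pl, the top-level data/filters/conf layout described above can be inspected programmatically. The following short Python sketch is an illustration only; the nested key names may vary between AliTV versions, and the file name is taken from the filtering example above.

import json

# Peek into an AliTV json file and list its top-level sections.
# Only the data/filters/conf split described in the manual is assumed here.
with open("xanthomonas_arboricola.json") as handle:
    alitv = json.load(handle)

for section in ("data", "filters", "conf"):
    if section in alitv:
        value = alitv[section]
        keys = sorted(value) if isinstance(value, dict) else value
        print(f"{section}: {keys}")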
https://alitv.readthedocs.io/en/latest/manual.html
2021-10-16T02:51:41
CC-MAIN-2021-43
1634323583408.93
[array(['_images/AliTV_logo.png', 'image'], dtype=object) array(['_images/userInterface.png', 'User interface of AliTV'], dtype=object) ]
alitv.readthedocs.io
The LIGO Data Grid (LDG)¶ The LIGO Data Grid (LDG) is a coordinated set of distributed computing centres operated by LIGO member groups as a subset of the IGWN Computing Grid. These computing centres share authentication infrastructure, among other things, to present a similar user experience across locations. Requesting an account¶ To request an account on the LIGO Data Grid, please complete the form at Access to the LIGO Data Grid¶ LIGO.ORG account holders can access the LIGO Data Grid in a few different ways: - Indirectly with ssh via ssh.ligo.org - Directly with ssh using an SSH key - Directly with ssh using a Kerberos TGT - Directly with gsissh (soon to be deprecated) 1. The ssh.ligo.org portal¶ The SSH portal ssh.ligo.org can be accessed using a LIGO.ORG username and password ssh albert.einstein@ssh.ligo.org Upon login, users will be presented with an interactive menu from which to choose the host to be connected to. ssh.ligo.org login menu ::: LIGO Data Grid Login Menu ::: Select from these LDG sites: 0. Logout 1. CDF - Cardiff University, Cardiff (UK) [Registered Users Only] 2. CIT - California Institute of Technology, Pasadena, CA (USA) 3. LHO - LIGO Hanford Observatory - Hanford, WA (USA) 4. LLO - LIGO Livingston Observatory - Livingston, LA (USA) 5. NIK - Dutch National e-Infrastructure, Amsterdam (Netherlands) 6. PSU - Penn State University, Pennsylvania, PA (USA) 7. UWM - Center for Gravitation, Cosmology & Astrophysics (Milwaukee, WI, USA) 8. IUCAA - Inter-University Centre for Astronomy & Astrophysics, Pune (India) ------------------------------------------------------------- Z. Specify alternative user account Enter selection from the list above: 2. Direct SSH access with an SSH key¶ LIGO.ORG account holders can manage SSH keys for use in accessing the LIGO Data Grid at. KAGRA account holders can manage SSH keys for use in accessing the LIGO Data Grid at. Once an SSH key has been uploaded, it can be used to connect directly to the desired computing centre host: ssh albert.einstein@<hostname> where <hostname> should be replaced with the name of the target login host, e.g. ligo.gravity.cf.ac.uk (the Hawk login host). Please refer to the documentation for each computing centre for a list of login host names. 3. Direct access with Kerberos TGT¶ Kerberos credentials are now supported at CIT, LHO and LLO for SSH sessions. For detailed instructions, go to. 4. Direct access with GSISSH - soon to be deprecated¶ LIGO.ORG account holders can connect to LDG hosts using gsissh, an extension of SSH that uses a pre-created credential. Install the LDG-Client In order to connect to the LDG clusters, the client tools that enable authentication (and authorisation) need to be installed. Installation instructions, for various supported platforms, can be found here. With the LDG-Client software installed, users can create a personal X.509 grid proxy via: ligo-proxy-init albert.einstein Tip Replace albert.einstein with your LIGO.ORG username. and can then continue to access an LDG host using gsissh: gsissh ldas-pcdev1.ligo.caltech.edu
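For scripted use of the first access method, the portal can also be reached from Python. This is only an illustrative sketch under stated assumptions (paramiko is not something the LDG documentation prescribes); interactively, plain ssh as shown above is the normal route.

import getpass
import paramiko

# Sketch: open a session to the ssh.ligo.org portal, roughly equivalent to
# `ssh albert.einstein@ssh.ligo.org`, and print the interactive login menu.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a demo, not for production use
client.connect(
    "ssh.ligo.org",
    username="albert.einstein",                       # replace with your LIGO.ORG username
    password=getpass.getpass("LIGO.ORG password: "),
)
channel = client.invoke_shell()                       # the portal responds with its site menu
print(channel.recv(4096).decode(errors="replace"))
client.close()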
https://computing.docs.ligo.org/guide/computing-centres/ldg/
2021-10-16T02:41:03
CC-MAIN-2021-43
1634323583408.93
[]
computing.docs.ligo.org
This article explains how to access Azure Blob storage by mounting storage using the Databricks File System (DBFS) or directly using APIs. Mount Azure Blob storage containers to DBFS You can mount a Blob storage container or a folder inside a container to DBFS. The mount is a pointer to a Blob storage container, so the data is never synced locally. Important - Azure Blob storage supports three blob types: block, append, and page. You can only mount block blobs to DBFS. - All users have read and write access to the objects in Blob storage containers mounted to DBFS. - To mount a Blob storage container or a folder inside a container, use the following command:
dbutils.fs.mount(
  source = "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net",
  mount_point = "/mnt/<mount-name>",
  extra_configs = {"<conf-key>":dbutils.secrets.get(scope = "<scope-name>", key = "<key-name>")})
dbutils.fs.mount(
  source = "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/<directory-name>",
  mountPoint = "/mnt/<mount-name>",
  extraConfigs = Map("<conf-key>" -> dbutils.secrets.get(scope = "<scope-name>", key = "<key-name>")))
where <storage-account-name> is the name of your Azure Blob storage account. <container-name> is the name of a container in your Azure Blob storage account. <mount-name> is a DBFS path representing where the Blob storage container or a folder inside the container (specified in source) will be mounted in DBFS. <conf-key> can be either fs.azure.account.key.<storage-account-name>.blob.core.windows.net or fs.azure.sas.<container-name>.<storage-account-name>.blob.core.windows.net dbutils.secrets.get(scope = "<scope-name>", key = "<key-name>") gets the key that has been stored as a secret in a secret scope. Access files in your container as if they were local files, for example:
# python
df = spark.read.text("/mnt/<mount-name>/...")
df = spark.read.text("dbfs:/<mount-name>/...")
// scala
val df = spark.read.text("/mnt/<mount-name>/...")
val df = spark.read.text("dbfs:/<mount-name>/...")
-- SQL
CREATE DATABASE <db-name> LOCATION "/mnt/<mount-name>"
Access Azure Blob storage directly This section explains how to access Azure Blob storage using the Spark DataFrame API, the RDD API, and the Hive client. Access Azure Blob storage using the DataFrame API You need to configure credentials before you can access data in Azure Blob storage, either as session credentials or cluster credentials. Run the following in a notebook to configure session credentials: Set up an account access key:
spark.conf.set(
  "fs.azure.account.key.<storage-account-name>.blob.core.windows.net",
  "<storage-account-access-key>")
Set up a SAS for a container:
spark.conf.set(
  "fs.azure.sas.<container-name>.<storage-account-name>.blob.core.windows.net",
  "<complete-query-string-of-sas-for-the-container>")
To configure cluster credentials, set Spark configuration properties when you create the cluster: Configure an account access key:
fs.azure.account.key.<storage-account-name>.blob.core.windows.net <storage-account-access-key>
Configure a SAS for a container:
fs.azure.sas.<container-name>.<storage-account-name>.blob.core.windows.net <complete-query-string-of-sas-for-the-container>
Warning These credentials are available to all users who access the cluster. 
Once an account access key or a SAS is set up in your notebook or cluster configuration, you can use standard Spark and Databricks APIs to read from the storage account: val df = spark.read.parquet("wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/<directory-name>") dbutils.fs.ls("wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/<directory-name>") Access Azure Blob storage using the RDD API Hadoop configuration options are not accessible via SparkContext. If you are using the RDD API to read from Azure Blob storage, you must set the Hadoop credential configuration properties as Spark configuration options when you create the cluster, adding the spark.hadoop. prefix to the corresponding Hadoop configuration keys to propagate them to the Hadoop configurations that are used for your RDD jobs: Configure an account access key: spark.hadoop.fs.azure.account.key.<storage-account-name>.blob.core.windows.net <storage-account-access-key> Configure a SAS for a container: spark.hadoop.fs.azure.sas.<container-name>.<storage-account-name>.blob.core.windows.net <complete-query-string-of-sas-for-the-container> Warning These credentials are available to all users who access the cluster. Access Azure Blob storage from the Hive client Credentials set in a notebook’s session configuration are not accessible to the Hive client. To propagate the credentials to the Hive client, you must set Hadoop credential configuration properties as Spark configuration options when you create the cluster: Configure an account access key: spark.hadoop.fs.azure.account.key.<storage-account-name>.blob.core.windows.net <storage-account-access-key> Configure a SAS for a container: # Using a SAS token spark.hadoop.fs.azure.sas.<container-name>.<storage-account-name>.blob.core.windows.net <complete-query-string-of-sas-for-the-container> Warning These credentials are available to all users who access the cluster. Once an account access key or a SAS is set up in your cluster configuration, you can use standard Hive queries with Azure Blob storage: -- SQL CREATE DATABASE <db-name> LOCATION "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/"; The following notebook demonstrates mounting Azure Blob storage and accessing data through Spark APIs, Databricks APIs, and Hive.
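Tying the pieces above together, here is a minimal notebook-style Python (PySpark) sketch: it sets a session-scoped account key pulled from a secret scope and then reads a folder in the container directly over a wasbs:// URI. The spark and dbutils objects are provided automatically in a Databricks notebook, and all angle-bracket values are placeholders to substitute with your own names.

# Set a session credential from a secret scope, then read directly from Blob storage.
# Placeholders in angle brackets must be replaced with your own values.
spark.conf.set(
    "fs.azure.account.key.<storage-account-name>.blob.core.windows.net",
    dbutils.secrets.get(scope="<scope-name>", key="<key-name>"),
)

df = spark.read.parquet(
    "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/<directory-name>"
)
print(df.count())   # simple sanity check that the path is readable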
https://docs.databricks.com/data/data-sources/azure/azure-storage.html
2021-10-16T02:13:21
CC-MAIN-2021-43
1634323583408.93
[]
docs.databricks.com
Global role variables¶ In DebOps there's a strictly controlled separation between Ansible roles. Different roles cannot use variables from another role directly [1] to allow mixing and matching of roles on the playbook level and preserve soft dependencies. The reason for that is that if a role is not included in the currently executed playbook, its variables are not available and this can lead to broken or non-idempotent execution. One place where users can define variables that are always guaranteed to be present is the Ansible inventory. However, roles cannot modify the inventory directly because inventories come in many shapes and sizes - a YAML file, a dynamic script, etc. But since the inventory is always available, it can be used to define global variables that are shared between different Ansible roles. The debops__ variable namespace has been designated to be used for global variables. Roles can reference the debops__* variables in their tasks and templates, however their presence is not guaranteed - a default should always be provided. Below you can find a list of debops__* variables which are used across the DebOps roles and playbooks. The variables might not be used everywhere yet, however they will be added or will replace other variables in the future. This boolean variable is meant to be used with the no_log Ansible keyword in tasks that might operate on sensitive information like passwords, encryption keys, and the like. Setting the value to True will prevent Ansible from logging the sensitive contents or displaying any changes made to the files in the --diff output. For example, use the debops__no_log variable to control when a task can send log messages and diff output about its operation:
- name: Create a UNIX account
  user:
    name: 'example-user'
    password: '{{ "example-password" | password_hash("sha512") }}'
    state: 'present'
  no_log: '{{ debops__no_log | d(True) }}'
This is a similar case, but adds support for lists and automatically shows or hides task output depending on the presence of a specific parameter:
- name: Create a UNIX account
  user:
    name: '{{ item.name }}'
    password: '{{ item.password | d(omit) }}'
    state: '{{ item.state | d("present") }}'
  loop: '{{ users__accounts }}'
  no_log: '{{ debops__no_log | d(item.no_log | d(True if item.password|d() else False)) }}'
An example use on the command line to debug an issue without changing the inventory variables:
ansible-playbook -i <inventory> -l <hostname> -e 'debops__no_log=false' play.yml
Many Ansible modules related to file operations support the unsafe_writes parameter to allow operations that might be dangerous or destructive in certain conditions, but allow Ansible to work in specific environments, like bind-mounted files or directories. The debops__unsafe_writes variable allows activation of this mode per-host using the Ansible inventory, for all roles that implement it. To have an effect, roles that depend on unsafe writes to function should use the parameter in relevant tasks, like this:
- name: Generate configuration file
  template:
    src: 'etc/application.conf.j2'
    dest: '/etc/application.conf'
    mode: '0644'
    unsafe_writes: '{{ debops__unsafe_writes | d(omit) }}'
Footnotes
https://docs.debops.org/en/latest/user-guide/global-variables.html
2021-10-16T02:47:23
CC-MAIN-2021-43
1634323583408.93
[]
docs.debops.org
Active IQ Unified Manager (formerly OnCommand Unified Manager) helps you to monitor a large number of systems running ONTAP software through a centralized user interface. The Unified Manager server infrastructure delivers scalability, supportability, and enhanced monitoring and notification capabilities. The key capabilities of Unified Manager include monitoring, alerting, managing availability and capacity of clusters, managing protection capabilities, and bundling diagnostic data and sending it to technical support. You can use Unified Manager to monitor your clusters. When issues occur in the cluster, Unified Manager notifies you about the details of such issues through events. Some events also provide you with a remedial action that you can take to rectify the issues. You can configure alerts for events so that when issues occur, you are notified through email and SNMP traps. You can use Unified Manager to manage storage objects in your environment by associating them with annotations. You can create custom annotations and dynamically associate clusters, storage virtual machines (SVMs), and volumes with the annotations through rules. You can also plan the storage requirements of your cluster objects using the information provided in the capacity and health charts for the respective cluster object.
https://docs.netapp.com/ocum-99/topic/com.netapp.doc.onc-um-ag/GUID-3FE8708C-5344-44F3-9EBA-D5F8878D51E1.html
2021-10-16T04:00:21
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
scipy.interpolate.splantider¶ - scipy.interpolate.splantider(tck, n=1)[source]¶ Compute the spline for the antiderivative (integral) of a given spline. - Parameters - tck : BSpline instance or a tuple of (t, c, k) Spline whose antiderivative to compute - n : int, optional Order of antiderivative to evaluate. Default: 1 - Returns - BSpline instance or a tuple of (t2, c2, k2) Spline of order k2=k+n representing the antiderivative of the input spline. A tuple is returned iff the input argument tck is a tuple, otherwise a BSpline object is constructed and returned. Notes The splder function is the inverse operation of this function. Namely, splder(splantider(tck)) is identical to tck, modulo rounding error. New in version 0.13.0. Examples
>>> import numpy as np
>>> from scipy.interpolate import splrep, splder, splantider, splev
>>> x = np.linspace(0, np.pi/2, 70)
>>> y = 1 / np.sqrt(1 - 0.8*np.sin(x)**2)
>>> spl = splrep(x, y)
The derivative is the inverse operation of the antiderivative, although some floating point error accumulates:
>>> splev(1.7, spl), splev(1.7, splder(splantider(spl)))
(array(2.1565429877197317), array(2.1565429877201865))
The antiderivative can be used to evaluate definite integrals:
>>> ispl = splantider(spl)
>>> splev(np.pi/2, ispl) - splev(0, ispl)
2.2572053588768486
This is indeed an approximation to the complete elliptic integral \(K(m) = \int_0^{\pi/2} [1 - m\sin^2 x]^{-1/2} dx\):
>>> from scipy.special import ellipk
>>> ellipk(0.8)
2.2572053268208538
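Since splantider also accepts and returns BSpline objects (as noted under Returns), the same calculation can be written in the object-oriented style. This is an illustrative sketch rather than part of the official reference: make_interp_spline is used here to build the BSpline, and the printed value should agree with the splrep-based result above to within interpolation error.

import numpy as np
from scipy.interpolate import make_interp_spline, splantider

# Build a BSpline interpolant, take its antiderivative with splantider,
# and evaluate the definite integral by calling the resulting BSpline.
x = np.linspace(0, np.pi / 2, 70)
y = 1 / np.sqrt(1 - 0.8 * np.sin(x) ** 2)

spl = make_interp_spline(x, y)     # BSpline instance
ispl = splantider(spl)             # antiderivative, also a BSpline
print(ispl(np.pi / 2) - ispl(0))   # ~2.257, approximates the complete elliptic integral K(0.8)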
https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splantider.html
2021-10-16T02:31:05
CC-MAIN-2021-43
1634323583408.93
[]
docs.scipy.org
In this security release, we have been able to close security gaps of the threat levels "medium" and "critical". All listed issues were discovered in an internal penetration test. Affected are the Shopware versions from 6.1.0 up to and including 6.4.3.0. The following vulnerabilities have been fixed with this security update: NEXT-15601: Manipulation of product reviews via API. NEXT-15673: Authenticated server-side request forgery in file upload via URL. NEXT-15677: Cross-Site Scripting via SVG media files. NEXT-15675: Insecure direct object reference of log files of the Import/Export feature. NEXT-15669: Command injection in mail agent settings. We recommend updating to the current version 6.4.3.1. You can get the update to 6.4.3.1 regularly via the Auto-Updater or directly via the download overview. For older versions of 6.3 and lower, corresponding security measures are also available via a plugin.
https://docs.shopware.com/en/shopware-6-en/security-updates/security-update-08-2021
2021-10-16T02:29:47
CC-MAIN-2021-43
1634323583408.93
[]
docs.shopware.com
Code search Search code across all your repositories and code hosts A recently published research paper from Google and a Google developer survey showed that 98% of developers consider their Sourcegraph-like internal code search tool to be critical, and developers use it on average for 5.3 sessions each day, primarily to (in order of frequency): - find example code - explore/read code - debug issues - determine the impact of changes Sourcegraph code search helps developers perform these tasks more quickly and effectively by providing fast, advanced code search across multiple repositories. Getting started New to search? See search examples for inspiration. Intro to code search video Watch the intro to code search video to see what you can do with Sourcegraph search. Learn the search syntax Learn the search syntax for writing powerful search queries. Explanations - Search features - Use regular expressions and exact queries to perform full-text searches. - Perform language-aware structural search on code structure. - Search any branch and commit, with no indexing required. - Search commit diffs and commit messages to see how code has changed. - Narrow your search by repository and file pattern. - Define saved search scopes for easier searching. - Use search contexts to search across a set of repositories at specific revisions. - Curate saved searches for yourself or your org. - Set up notifications for code changes that match a query. - View language statistics for search results. - Search details - Sourcegraph cloud - Search tips
https://docs.sourcegraph.com/code_search
2021-10-16T02:20:22
CC-MAIN-2021-43
1634323583408.93
[]
docs.sourcegraph.com
Tips for Choosing the Best Educational Leader People who are looking for educational leaders should always take their time and choose the one that offers the best consulting services. At least you should take some of your time and choose the leader that is properly educated and has enough knowledge. The current market has several leaders that are present and might support you. You should learn from various sources if you are to discover more about these leaders. More information about a good leader will be gathered after you decide to read more here from these sources. The information that these sources provide has helped a lot of people sort out their needs. Therefore, ensure you have sufficient information before you decide to choose the professional. Other people can also offer you the appropriate information. Such people will be very important to have on your side. At least you should click here for more information about a good educational leader. You should choose the leader that is educated. Once you identify them, it will be easier for you to sort out those that are educated from the rest. This process will be simpler for you and you won't waste a lot of time. You might also decide to engage with different people. These people might provide you with a lot of information about educated leaders. At least they have engaged with several of them and hence know those that will deliver. Finally, you should consider using online reviews. More information about various leaders is always present in online reviews. The moment you choose to use them, you will get the needed support. A link to various pages that have a lot of information about the leader can be provided. Also, you will discover more about this product through the information provided. Once you read through the available reviews, you will get the chance to know if the leader managed to meet the demands of his clients. When you decide to use these reviews, they will give you more support that will help you. Since they will help you, it will be right to consider them. Resource: i loved this
http://docs-prints.com/2021/01/25/my-most-valuable-tips-16/
2021-10-16T03:24:04
CC-MAIN-2021-43
1634323583408.93
[]
docs-prints.com
What is a _XLSX file? A file with ._xlsx extension is actually an XLSX file that has been renamed by some other application. This can happen in certain cases when the filename contains a . at the end of the file name. _XLSX files can be opened in Microsoft Excel just like XLSX files by renaming them back to the .xlsx extension. _XLSX File Format - More information _XLSX files are no different from XLSX files and use the Open XML standard adopted by Microsoft. Before XLSX, XLS was the primary file format used for working with Excel spreadsheets, saving documents in a binary format. The new XML-based file format came with advantages such as small file sizes, resistance to file corruption, and well-formatted image representation. This XML-based file format became part of Office 2007 and is carried forward in newer versions of Microsoft Office as well. _XLSX File Format Specifications As an _XLSX file is a renamed XLSX file, it has the same specifications as the original file. It is an archive file based on the ZIP archival file format. If you want to see the contents of this archive, just rename the file to the .zip extension and extract it with any ZIP utility.
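Because a _XLSX file is just a renamed XLSX file, and XLSX itself is a ZIP package, both the rename and the inspection step can be scripted. The short Python sketch below illustrates this; the file names are hypothetical examples.

import shutil
import zipfile

# Copy the file under a proper .xlsx name so Excel (or any spreadsheet app) will open it,
# then list the Open XML package contents to confirm it really is a ZIP-based XLSX.
shutil.copy("report._xlsx", "report.xlsx")   # hypothetical file names

with zipfile.ZipFile("report.xlsx") as package:
    for name in package.namelist():
        print(name)   # e.g. [Content_Types].xml, xl/workbook.xml, xl/worksheets/sheet1.xml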
https://docs.fileformat.com/spreadsheet/_xlsx/
2021-10-16T03:14:13
CC-MAIN-2021-43
1634323583408.93
[]
docs.fileformat.com
GetSoftwareEncryptionAtRestInfo You can use the GetSoftwareEncryptionAtRestInfo method to get the software encryption-at-rest information the cluster uses to encrypt data at rest. Parameters This method has no input parameters. Return values This method has the following return values: Request example Requests for this method are similar to the following example:
{
  "method": "getsoftwareencryptionatrestinfo"
}
Response example This method returns a response similar to the following example:
{
  "rekeyMasterKeyAsyncResultID": 1,
  "state": "abcdefghij",
  "version": 1,
  "masterKeyInfo": {
    "keyProviderID": 1,
    "keyManagementType": "abcdefghij",
    "keyID": "abcdef01-1234-5678-90ab-cdef01234567",
    "keyCreatedTime": "abcdefghij"
  }
}
New since version 12.3
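For illustration, the request body shown above can be posted to the cluster's JSON-RPC endpoint from Python. The endpoint path (/json-rpc/12.3), the basic-auth credentials and the management hostname below are assumptions made for this sketch, not values from this reference page; adjust them for your own cluster.

import requests

MVIP = "cluster-mvip.example.com"   # hypothetical cluster management virtual IP / hostname

# Post the request body from the example above to the cluster's JSON-RPC endpoint.
response = requests.post(
    f"https://{MVIP}/json-rpc/12.3",                      # assumed API version path
    json={"method": "getsoftwareencryptionatrestinfo"},   # body taken from the request example
    auth=("admin", "admin-password"),                     # placeholder cluster admin credentials
    verify=False,                                         # only if the cluster uses a self-signed certificate
)
print(response.json())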
https://docs.netapp.com/us-en/element-software/api/reference_element_api_getsoftwareencryptionatrestinfo.html
2021-10-16T04:03:16
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
Are you working on [feature]?# We're working as fast as we can! We are working on building new features as well as providers; in case something is missing please check our Github Discussions. To quickly summarize: - A Unified API for triggering all types of product-based notifications. - Easy-to-use provider registration as well as a provider template to create new ones. - A proper declarative template store that is easy to manage, outside your code's business logic. - A JavaScript/Node.js library and APIs, implementing OCL. - Theming, properties, and filters for templates, so you won't need to write that in your code. Why should I use this and not [provider name]?# While it is perfectly reasonable to just use [provider name], it creates multiple issues. First and foremost, separation of concerns: a business logic unit should not manage a provider API, just your business logic. Secondly, once you start using [provider name] in your code, you start managing templates, permutations and filters in your code, and the code becomes unreadable and tightly coupled. OCL solves this. Do you support [provider name]?# You can see our providers in this providers directory; if something is missing you are more than welcome to suggest or upvote a provider here. Lastly, you can create your very own provider by following this guide. Do you have a library for [some other language]?# We currently have a JavaScript library. You can vote on or suggest a new client library for your favorite language. There is a similar library for Python called Notifiers. While Notifiers is not exactly the same, it does solve a lot of the issues, mainly around provider coupling.
https://docs.notifire.co/docs/community/faq
2021-10-16T04:05:16
CC-MAIN-2021-43
1634323583408.93
[]
docs.notifire.co
Use data retention strategies to schedule and manage your database cleanup Use the data retention tool configure_db_maintenance.py as a single tool for setting scheduled and automatic deletion of unused or outdated data from the PostgreSQL database. This tool works with all models: containers, indicators, audit logs, device profiles for mobile registration, notifications, and playbook run records. - Model - Any item that is a record in the PostgreSQL database. A model is defined by a set of characteristics that determine what kind of information the record represents. For example, a container is a model for data retention strategies. - Strategy - The configurable parameters that define when a record should be deleted when the tool is run, or that define when records should be deleted automatically. To use the configure_db_maintenance.py tool, follow these steps: - SSH to your instance. ssh <username>@<phantom_hostname> - Use the following tool to manage data deletion based on your installation. - For an unprivileged installation, use this command: phenv python /opt/phantom/www/manage.py configure_db_maintenance - For a privileged installation, use this command: sudo phenv python /opt/phantom/www/manage.py configure_db_maintenance - Append your desired argument to the data retention tool command line to schedule, list, enable, or disable data retention actions. On clustered systems, the configure_db_maintenance.py tool can be run from any node, but only the leader node runs the data retention strategy. Data retention tool arguments Append the --help argument to your tool to get information on the data retention tool arguments. Optional arguments Use these optional arguments to manage your data retention strategy. You must specify the target model to add, delete, enable, or disable a model. Add a model to your data retention strategy The following arguments are required to successfully add a model to the data retention strategy. If you add a data retention strategy for a model that already has one, the new strategy replaces the existing strategy. Edit a model's entry in your data retention strategy The following arguments are required to edit a model in the data retention strategy. Examples Delete indicator records after three months: Change the schedule on which configure_db_maintenance runs: This documentation applies to the following versions of Splunk® Phantom: 4.10, 4.10.1, 4.10.2, 4.10.3, 4.10.4, 4.10.6, 4.10.7
https://docs.splunk.com/Documentation/Phantom/4.10.3/Admin/DataRetention
2021-10-16T02:05:31
CC-MAIN-2021-43
1634323583408.93
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Run the setup command that is appropriate for the operating system as shown in Server setup command syntax for Wizard mode. Click OK in the Warning dialog box. If stopping services is necessary, you will be prompted with specific instructions later in the installation process. - Click Next in the Welcome screen. Read and accept the end user license agreement and click Next. If the installation program detects an existing installation of the same product, the Upgrade or Install screen appears. In the Upgrade or Install screen, select Install products to a new directory. Click Next to accept the default installation directory or type your preferred directory and click Next. The default installation directory is: If you specify a directory, the directory name cannot contain spaces. If the specified directory does not exist, it will be created. If you do not have write privileges, an error message appears. Click Next. In the Services Selection screen, select the products that you want to install as services and click Next. If you do not install services at this point, you will need to install them manually later. For Service Assurance Manager services, you have two choices: Select VMware Smart Assurance Servelet Engine if you plan to run only the ic-business-dashboard service. In the Broker Specification screen, specify the VMware Smart Assurance Broker. If you are installing the Broker as a service or a server, specify the port and hostname. If the Broker is already running on this host, keep the default values. If the Broker is running on another host, specify the hostname of that system and the port that the Broker uses. Click Next to continue. The Installation Criteria screen appears. Review the list of products that will be installed and the target installation directory. At the bottom of the list, the total amount of disk space that is required for the selected products is provided so that you can verify that adequate disk space is available. To install the products, click Next and the Installation Progress screen appears. Upon completion, reboot your system if required; otherwise, click Finish.
https://docs.vmware.com/en/VMware-Smart-Assurance/0.0/Getting-Started-with-VMware-Smart-Assurance/GUID-40323F67-1B4E-44F5-8FE4-620A7320806E.html
2021-10-16T02:53:42
CC-MAIN-2021-43
1634323583408.93
[]
docs.vmware.com
Address Verification Process Bolt Checkout includes address verification to ensure shoppers enter a correct shipping address. When a shopper enters an unrecognized shipping address, Bolt Checkout highlights the shipping address field and prompts the shopper to verify the address. A shopper may still confirm an address that has been flagged as unrecognized. This reduces friction and enables the completion of orders for shoppers whose addresses may not reside in common databases. APT, Unit, & Floor Validation Address verification also detects whether shoppers have properly entered information about their apartment or unit. Bolt also verifies whether the unit entered is valid for the building, and notifies the shopper if that unit is unknown. Address verification currently only applies to US addresses. If a country other than the US is selected, Bolt Checkout will not perform address verification. Address Validation In addition to Address Verification, Bolt Checkout uses Address Validation to ensure that a shopper has entered a valid shipping address. This reduces support overhead and minimizes shipping mistakes and delays. There are three components to Address Validation: Address Matching, Address Fixing, and Address Reentry. Address Matching During Address Matching, Bolt will compare the address entered in checkout to a third-party address verification database, Lob. If any of the address fields entered by the shopper do not match the validated fields in the database, Bolt will automatically update the address to the validated address. Shoppers are given the opportunity to opt out of address updates, as shown below in Address Fixing. Address Fixing If Bolt updates a shopper’s address to a validated address, as outlined above, they will see a notice that their address has been updated and will be given the option to review. Review If a shopper selects REVIEW, they will see their original address and the validated address, and will be able to toggle between the two. If the shopper. FAQs Q: Why does Bolt proactively update the address instead of making the shopper choose? A: Bolt is dedicated to eliminating friction in the checkout experience. Bolt has concluded that address validation is generally correct, and so burdening the shopper with the choice over small tweaks in their address is unnecessary. Shoppers will still be able to opt out of the change, but in most situations, this should be an improvement for shoppers. Q: If ‘Street’ rather than ‘St.’ is used, will shoppers be prompted to correct their address? A: Validation will only be triggered when there’s a major issue with the address (such as a mistaken zip code or invalid street name). Bolt will ignore updates that are simply updating to abbreviations.
https://docs.bolt.com/merchants/references/checkout-features/address-verification/
2021-10-16T02:57:14
CC-MAIN-2021-43
1634323583408.93
[]
docs.bolt.com
Release Notes Chocolatey Release Notes - ChocoCCM Summary This covers the release notes for the ChocoCCM PowerShell Module, which is available for installation from the PowerShell Gallery. For more information, installation options, etc., please refer to ChocoCCM. 📝 NOTE This PowerShell Module requires an installation of at least CCM v0.4.0 in order to be fully compatible. 0.2.0 (April 1, 2021) BUG FIXES - Fix - Add-CCMGroup throws HTTP 400 error - Fix - Add-CCMGroupMember throws HTTP 400 error - Fix - Fetching Deployment by ID fails when Deployment is not in Draft or Ready state - Fix - Get-CCMGroupMember has incorrect URL - Fix - Export-CCMDeploymentReport is not exported by the module IMPROVEMENTS - Added a new Remove-CCMGroup cmdlet to allow removal of a group - Added a new Remove-CCMGroupMember cmdlet to allow removal of a computer or group from a CCM group - Added functionality to the Get-CCMDeploymentStep cmdlet to allow retrieval of deployment steps, results, and logs from a deployment step and its computers 0.1.1 (December 4, 2020) BUG FIXES - Fix - New-CCMDeploymentStep Throws HTTP 400 Error - Fix - Not all functions are returning all objects by default 0.1.0 (November 13, 2020) Initial preview release FEATURES - PowerShell functions are provided for interacting with the core entities within CCM via the Web API - Roles - Groups - Computers - Deployments - Outdated Software - Reports
https://docs.chocolatey.org/en-us/central-management/chococcm/release-notes
2021-10-16T03:07:07
CC-MAIN-2021-43
1634323583408.93
[]
docs.chocolatey.org
What is a DRV file? The files with a .drv extension belong to the Windows Device Driver. The Windows Operating System uses these files to connect internal and external hardware devices. DRV files consist of instructions and parameters for how a device and the operating system link together. These files help to install device drivers for proper functioning with Windows. Also, the devices linked with the PC’s motherboard via bus or cable require DRV files. DRV File Format DRV files are usually packaged as dynamic libraries (DLL files) or EXE files. These files may be supported on many system platforms, including smartphones, but there is no assurance that each platform will properly support them. However, some of the most common devices that use the .drv file extension are: - Sound cards - Graphic cards - Printers - Storage devices - Network adapters - Computer hardware accessories It is recommended not to open DRV files sent via email, because this file format can carry viruses and other malicious programs. Make sure to perform a comprehensive scan before opening any unknown DRV file. DRV Example
// Include necessary files...
#include <font.defs>
#include <media.defs>
#include <hp.h>
#include <epson.h>
#include <label.h>
// Localizations are provided for all of the base languages supported by
// CUPS...
#po ar ""
#po ca ""
#po de ""
#po el ""
#po es ""
#po fr ""
#po no ""
#po ru ""
#po sk ""
#po sv ""
#po th ""
#po tr ""
#po uk ""
// MediaSize sizes used by label drivers...
#media "w90h18/1.25x0.25\"" 90 18
#media "w90h162/1.25x2.25\"" 90 162
#media "w108h18/1.50x0.25\"" 108 18
#media "w108h36/1.50x0.50\"" 108 36
#media "w108h72/1.50x1.00\"" 108 72
#media "w108h144/1.50x2.00\"" 108 144
#media "w144h26/2.00x0.37\"" 144 26
#media "w576h360/8.00x5.00\"" 576 360
#media "w576h432/8.00x6.00\"" 576 432
#media "w576h468/8.00x6.50\"" 576 468
// Common stuff for all drivers...
Attribute "cupsVersion" "" "2.2"
Attribute "FileSystem" "" "False"
Attribute "LandscapeOrientation" "" "Plus90"
Attribute "TTRasterizer" "" "Type42"
Font *
Version "2.1"
// Dymo Label Printer
{
  Manufacturer "Dymo"
  ModelName "Label Printer"
  Attribute NickName "" "Dymo Label Printer"
  PCFileName "dymo.ppd"
  DriverType label
  ModelNumber $DYMO_3x0
  Throughput 8
  ManualCopies Yes
  ColorDevice No
  HWMargins 2 14.9 2 14.9
  *MediaSize w81h252
  MediaSize w101h252
  MediaSize w54h144
  MediaSize w167h288
  MediaSize w162h540
  MediaSize w162h504
  MediaSize w41h248
  MediaSize w41h144
  MediaSize w153h198
  Resolution k 1 0 0 0 136dpi
  Resolution k 1 0 0 0 203dpi
  *Resolution k 1 0 0 0 300dpi
  Darkness 0 Light
  Darkness 1 Medium
  *Darkness 2 Normal
  Darkness 3 Dark
}
https://docs.fileformat.com/system/drv/
2021-10-16T02:44:39
CC-MAIN-2021-43
1634323583408.93
[]
docs.fileformat.com
Setup Wizard¶ The first time a user logs into the pfSense® software GUI, the firewall presents the Setup Wizard automatically. The first page of the wizard is shown in Figure Setup Wizard Starting Screen. Click Next to proceed. Tip Using the setup wizard is optional. Click the logo at the top left of the page to exit the wizard at any time. The next screen of the wizard explains the availability of support from Netgate. Click Next again to start the configuration process using the wizard. General Information Screen¶ The next screen (Figure General Information Screen) configures the name of this firewall, the domain in which it resides, and the DNS servers for the firewall. - Hostname The Hostname is a name that should uniquely identify this firewall. It can be nearly anything, but must start with a letter and it may contain only letters, numbers, or a hyphen. - Domain Enter a Domain, e.g. example.com. If this network does not have a domain, use <something>.home.arpa, where <something> is another identifier: a company name, last name, nickname, etc. For example, company.home.arpa. The hostname and domain name are combined to make up the fully qualified domain name of this firewall. - Primary/Secondary DNS Server The IP address of the Primary DNS Server and Secondary DNS Server, if known. These DNS servers may be left blank if the DNS Resolver will remain active using its default settings. The default configuration has the DNS Resolver active in resolver mode (not forwarding mode); when set this way, the DNS Resolver does not need forwarding DNS servers as it will communicate directly with Root DNS servers and other authoritative DNS servers. To force the firewall to use these configured DNS servers, enable forwarding mode in the DNS Resolver or use the DNS Forwarder. If this firewall has a dynamic WAN type such as DHCP, PPTP, or PPPoE, these may be automatically assigned by the ISP and can be left blank. Note The firewall can have more than two DNS servers; add more under System > General Setup after completing the wizard. - Override DNS When checked, a dynamic WAN ISP can supply DNS servers which override those set manually. To force the use of only the DNS servers configured manually, uncheck this option. See also For more information on configuring the DNS Resolver, see DNS Resolver Click Next to continue. NTP and Time Zone Configuration¶ The next screen (Figure NTP and Time Zone Setup Screen) has time-related options. - Time server hostname A Network Time Protocol (NTP) server hostname or IP address. Unless a specific NTP server is required, such as one on the LAN, the best practice is to leave the Time server hostname at the default 2.pfsense.pool.ntp.org. This value will pick a set of random servers from a pool of known-good NTP hosts. To utilize multiple time server pools or individual servers, add them in the same box, separating each server by a space. For example, to use three NTP servers from the pool, enter: 0.pfsense.pool.ntp.org 1.pfsense.pool.ntp.org 2.pfsense.pool.ntp.org This numbering is specific to how .pool.ntp.org operates and ensures each address is drawn from a unique pool of NTP servers so the same server does not get used twice. - Timezone Choose a geographically named zone which best matches the location of this firewall, or any other desired zone. Click Next to continue. WAN Configuration¶ The next page of the wizard configures the WAN interface of the firewall. 
This is the external network facing the ISP or upstream router, so the wizard offers configuration choices to support several common ISP connection types. - WAN Type The Selected Type (Figure WAN Configuration) must match the type of WAN required by the ISP, or whatever the previous firewall or router was configured to use. Possible choices are Static, DHCP, PPPoE, and PPTP. The default choice is DHCP due to the fact that it is the most common, and for the majority of cases this setting allows a firewall to “Just Work” without additional configuration. If the WAN type is not known, or specific settings for the WAN are not known, this information must be obtained from the ISP. If the required WAN type is not available in the wizard, or to read more information about the different WAN types, see Interface Types and Configuration. Note If the WAN interface is wireless, additional options will be presented by the wizard which are not covered during this walkthrough of the standard Setup Wizard. Refer to Wireless, which has a section on Wireless WAN for additional information. If any of the options are unclear, skip the WAN setup for now, and then perform the wireless configuration afterward. - MAC Address This field, shown in Figure General WAN Configuration, changes the MAC address used on the WAN network interface. This is also known as “spoofing” the MAC address. Note The problems alleviated by spoofing a MAC address are typically temporary and easily worked around. The best course of action is to maintain the original hardware MAC address, resorting to spoofing only when absolutely necessary. Changing the MAC address can be useful when replacing an existing piece of network equipment. Certain ISPs, primarily Cable providers, will not work properly if a new MAC address is encountered. Some Internet providers require power cycling the modem, others require registering the new address over the phone. Additionally, if this WAN connection is on a network segment with other systems that locate it via ARP, changing the MAC to match and older piece of equipment may also help ease the transition, rather than having to clear ARP caches or update static ARP entries. Warning If this firewall will ever be used as part of a High Availability Cluster, do not spoof the MAC address. - Maximum Transmission Unit (MTU) The MTU field, shown in Figure General WAN Configuration, can typically be left blank, but can be changed when necessary. Some situations may call for a lower MTU to ensure packets are sized appropriately for an Internet connection. In most cases, the default assumed values for the WAN connection type will work properly. - Maximum Segment Size (MSS) MSS, shown in Figure General WAN Configuration can typically be left blank, but can be changed when necessary. This field enables MSS clamping, which ensures TCP packet sizes remain adequately small for a particular Internet connection. - Static IP Configuration If the “Static” choice for the WAN type is selected, the IP address, Subnet Mask, and Upstream Gateway must all be filled in (Figure Static IP Settings). This information must be obtained from the ISP or whoever controls the network on the WAN side of this firewall. The IP Address and Upstream Gateway must both reside in the same Subnet. - DHCP Hostname This field (Figure DHCP Hostname Setting) is only required by a few ISPs. This value is sent along with the DHCP request to obtain a WAN IP address. 
If the value for this field is unknown, try leaving it blank unless directed otherwise by the ISP. - PPPoE Configuration When using the PPPoE (Point-to-Point Protocol over Ethernet) WAN type (Figure PPPoE Configuration), The PPPoE Username and PPPoE Password fields are required, at a minimum. The values for these fields are determined by the ISP. - PPPoE Username The login name for PPPoE authentication. The format is controlled by the ISP, but commonly uses an e-mail address style such as [email protected]. - PPPoE Password The password to login to the account specified by the username above. The password is masked by default. To view the entered password, check Reveal password characters. - PPPoE Service Name The PPPoE Service name may be required by an ISP, but is typically left blank. When in doubt, leave it blank or contact the ISP and ask if it is necessary. - PPPoE Dial on Demand This option leaves the connection down/offline until data is requested that would need the connection to the Internet. PPPoE logins happen quite fast, so in most cases the delay while the connection is setup would be negligible. If public services are hosted behind this firewall, do not check this option as an online connection must be maintained as much as possible in that case. Also note that this choice will not drop an existing connection. - PPPoE Idle Timeout Specifies how much time the PPPoE connection remain up without transmitting data before disconnecting. This is only useful when coupled with Dial on demand, and is typically left blank (disabled). Note This option also requires the deactivation of gateway monitoring, otherwise the connection will never be idle. - PPTP Configuration The PPTP (Point-to-Point Tunneling Protocol) WAN type (Figure PPTP WAN Configuration) is for ISPs that require a PPTP login, not for connecting to a remote PPTP VPN. These settings, much like the PPPoE settings, will be provided by the ISP. A few additional options are required: - Local IP Address The local (usually private) address used by this firewall to establish the PPTP connection. - CIDR Subnet Mask The subnet mask for the local address. - Remote IP Address The PPTP server address, which is usually inside the same subnet as the Local IP address. These last two options, seen in Figure Built-in Ingress Filtering Options, are useful for preventing invalid traffic from entering the network protected by this firewall, also known as “Ingress Filtering”. - Block RFC 1918 Private Networks Blocks connections sourced from registered private networks such as 192.168.x.xand 10.x.x.xattempting to enter the WAN interface . A full list of these networks is in Private IP Addresses. - Block Bogon Networks When active, the firewall blocks traffic from entering if it is sourced from reserved or unassigned IP space that should not be in use. The list of bogon networks is updated periodically in the background, and requires no manual maintenance. Bogon networks are further explained in Block Bogon Networks. Click Next to continue once the WAN settings have been filled in. LAN Interface Configuration¶ This page of the wizard configures the LAN IP Address and Subnet Mask (Figure LAN Configuration). If this firewall will not connect to any other network via VPN, the default 192.168.1.0/24 network may be acceptable. If this network must be connected to another network, including via VPN from remote locations, choose a private IP address range much more obscure than the common default of 192.168.1.0/24. 
IP space within the 172.16.0.0/12 RFC 1918 private address block is generally the least frequently used, so choose something between 172.16.x.x and 172.31.x.x to help avoid VPN connectivity difficulties. If the LAN is 192.168.1.x and a remote client is at a wireless hotspot using 192.168.1.x (very common), the client will not be able to communicate across the VPN. In that case, 192.168.1.x is the local network for the client at the hotspot, not the remote network over the VPN. If the LAN IP Address must be changed, enter it here along with a new Subnet Mask. If these settings are changed, the IP address of the computer used to complete the wizard must also be changed if it is connected through the LAN. Release/renew its DHCP lease, or perform a “Repair” or “Diagnose” on the network interface when finished with the setup wizard. Click Next to continue. Set admin password¶ Next, change the administrative password for the GUI as shown in Figure Change Administrative Password. The best practice is to use a strong and secure password, but no restrictions are automatically enforced. Enter the password in the Admin Password and confirmation box to be sure that has been entered correctly. Click Next to continue. Warning Do not leave the password set to the default pfsense. If access to the firewall administration via GUI or SSH is exposed to the Internet, intentionally or accidentally, the firewall could easily be compromised if it still uses the default password. Completing the Setup Wizard¶ That completes the setup wizard configuration. Click Reload (Figure Reload the GUI) and the GUI will apply the settings from the wizard and reload services changed by the wizard. Tip If the LAN IP address was changed in the wizard and the wizard was run from the LAN, adjust the client computer’s IP address accordingly after clicking Reload. When prompted to login again, enter the new password. The username remains admin. After reloading, the final screen of the wizard includes convenient links to check for updates, get support, and other resources. Click Finish to complete and exit the wizard. At this point the firewall will have basic connectivity to the Internet via the WAN and clients on the LAN side will be able to reach Internet sites through this firewall. If at any time this initial configuration must be repeated, revisit the wizard at System > Setup Wizard from within the GUI.
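To make the subnet-selection advice above concrete, the short Python sketch below (standard library only; it is not part of pfSense) checks whether a candidate LAN subnet collides with networks a remote VPN client might already be using. The example "hotspot" networks are assumptions chosen for illustration:
import ipaddress

def overlaps(candidate, other_networks):
    # Return the networks from other_networks that overlap the candidate LAN subnet.
    cand = ipaddress.ip_network(candidate)
    return [str(net) for net in map(ipaddress.ip_network, other_networks) if cand.overlaps(net)]

common_hotspot_nets = ["192.168.1.0/24", "192.168.0.0/24", "10.0.0.0/24"]  # illustrative only

print(overlaps("192.168.1.0/24", common_hotspot_nets))   # collides with a typical hotspot LAN
print(overlaps("172.21.53.0/24", common_hotspot_nets))   # an obscure 172.16.0.0/12 subnet: no overlap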
https://docs.netgate.com/pfsense/en/latest/config/setup-wizard.html
2021-10-16T03:48:48
CC-MAIN-2021-43
1634323583408.93
[]
docs.netgate.com
Quickstart step 8: Additional resources Congratulations on making it to the end of the quickstart guide! Here are some additional resources to help you go further: sg, the Sourcegraph developer tool - How-to guides, particularly: - Background information for more context
https://docs.sourcegraph.com/dev/getting-started/quickstart_8_additional_resources
2021-10-16T03:26:47
CC-MAIN-2021-43
1634323583408.93
[]
docs.sourcegraph.com
Clojure This page discusses using Clojure to interact with Stardog. Overview The stardog-clj source code is available under the Apache 2.0 license. Installation Stardog-clj is available from Clojars. To use, just include the following dependency: [stardog-clj "7.4.0"] Starting with Stardog 6.0.1, the stardog-clj version always matches the latest release of Stardog. API Overview The API provides a natural progression of functions for interacting with Stardog. For example: (create-db-spec "testdb" "" "admin" "admin" "none") This creates a connection space for use in connect or make-datasource with the potential parameters: {:url "" ...}
https://docs.stardog.com/developing/programming-with-stardog/clojure
2021-10-16T03:14:40
CC-MAIN-2021-43
1634323583408.93
[]
docs.stardog.com
The User-Defined Suspicious Objects list allows you to define suspicious file SHA-1 hash value, IP address, URL, and domain objects that Deep Discovery products with Virtual Analyzer have not yet detected on your network. Supported Deep Discovery products can take action on the objects found in the list to prevent the spread of unknown threats.
https://docs.trendmicro.com/en-us/enterprise/deep-discovery-director-(consolidated-mode)-35-online-help/threat-intelligence/custom-intelligence/user-defined-suspici.aspx
2021-10-16T01:52:36
CC-MAIN-2021-43
1634323583408.93
[]
docs.trendmicro.com
Table of Contents Product Index Including a character morph and skin texture set for the Genesis 2 Female that is perfect for a strong female warrior or Super Hero. Head and body also come with separate morph dials. Four makeup versions are available along with five eye textures. Available with sub-surface scattering on or off.
http://docs.daz3d.com/doku.php/public/read_me/index/17237/start
2021-10-16T03:53:54
CC-MAIN-2021-43
1634323583408.93
[]
docs.daz3d.com
Some of the users might get this Page Unresponsive popup. I want to know what exactly is the reason for that? I am not asking how to fix it, but I want to know what kind of internal logic triggers that message? Thanks
This might happen on certain websites and there are many reasons behind it, such as a scripting loop (for example, a script loading incorrectly), or several programs running at the same time that consume a lot of resources and affect the performance of Microsoft Edge. When you encounter such an issue, first ask the user about the website or webpage causing this behavior and try to test it on your system with Microsoft Edge to see whether this is an issue with the website or the system. In case it reproduces on other systems, you need to inform the website owner about this issue. In case it is specific to one system, check and make sure that Microsoft Edge is fully updated. Verify whether the issue is reproducible without any extensions and in InPrivate Browsing mode, and also check resources such as CPU and GPU. Also check edge://settings/system and verify whether this is reproducible when hardware acceleration is off; if yes, then you will need to check your GPU and update it.
So, on the VM it should be happening more often because it has no GPU?
A VM also has a GPU, but it is virtual. As I mentioned, you need to test, investigate, and identify what is causing the issue using the methods I described earlier.
Hi @Zolotoy-3922 This can happen for many reasons. It can be caused by a slow Internet connection. In this situation, you'll have issues loading certain scripts, which can make the pages unresponsive. It can be caused by certain website scripts. Some websites use multiple scripts, and sometimes it's possible that one of those scripts is unresponsive. It can be caused by your computer configuration and resources. If you open multiple tabs while having many applications running in the background, the issue might happen. If the response is helpful, please click "Accept Answer" and upvote it. Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread. Regards, Yu Zhou
https://docs.microsoft.com/en-us/answers/questions/331588/edge-chromium-unresponsive-page.html
2021-10-16T03:18:15
CC-MAIN-2021-43
1634323583408.93
[]
docs.microsoft.com
Path Outages
Figure 1. Path Outage example
The IP interface on default-gw-01 in the network 192.168.1.0/24 is set as Primary interface. The IP interface in the network 172.23.42.0/24 on default-gw-02 is set as Primary interface.
Downtime Model
Poller Packages
https://docs.opennms.com/horizon/28.0.1/operation/service-assurance/path-outages.html
2021-10-16T02:44:27
CC-MAIN-2021-43
1634323583408.93
[]
docs.opennms.com
A sale is attributed to PushOwl if a subscriber makes a purchase within 72 hours after a push notification is delivered/clicked. PushOwl borrows its revenue attribution model from Display Ads, where the primary purpose is to re-engage your customers; hence we use impression-based attribution for reporting. However, we don't charge based on revenue but on the consumption of impressions. In case you want to set up revenue attribution via Google Analytics, you can do so by tracking the parameters on your Analytics dashboard. Tracking web push impact on your revenue in Google Analytics is done through clicks. PushOwl, by default, adds UTM parameters in all push notifications dispatched by our system. You can find detailed information about our UTM parameters here. If you want to set up a custom report on Google Analytics, you can find the step-by-step instructions to set up Google Analytics here: Setup Google Analytics for PushOwl. Reports in Google Analytics might differ due to the difference in attribution model. Welcome notifications, Shipping notifications, and notifications triggered by integration partners (Fera.ai, Judge.me, Loox, etc.) are not considered for revenue attribution.
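To make the 72-hour window concrete, here is a small illustrative Python sketch of this kind of attribution rule; it is not PushOwl's actual code, and the timestamps are made up:
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(hours=72)

def is_attributed(notification_time, purchase_time):
    # Attribute a purchase to a notification if it happens within 72 hours of delivery/click.
    return notification_time <= purchase_time <= notification_time + ATTRIBUTION_WINDOW

delivered = datetime(2021, 10, 1, 9, 0)  # hypothetical delivery time
print(is_attributed(delivered, datetime(2021, 10, 2, 20, 0)))  # True: within 72 hours
print(is_attributed(delivered, datetime(2021, 10, 5, 9, 1)))   # False: outside the window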
https://docs.pushowl.com/en/articles/2536710-revenue-attribution-from-notifications
2021-10-16T03:23:08
CC-MAIN-2021-43
1634323583408.93
[array(['https://downloads.intercomcdn.com/i/o/209326637/5cf260062614948294ae6f93/pushowl-google-analytics.png', None], dtype=object) ]
docs.pushowl.com
- Browse Abandonment: browse_abandonment
- Shipping notifications: fulfillment_complete
- Back in stock alerts: back_in_stock
- Price drop alerts: price_drop
- Welcome notifications: post_subscription
UTM campaign
All campaigns sent from PushOwl contain a unique campaign ID. If the notification was from a campaign, the utm_campaign contains this campaign ID as the parameter (for example: 325975). For automations like abandoned cart reminders, browse abandonment, shipping notifications, welcome notification, etc. the utm_campaign is the push notification's source_id, which is the name of the automation (for example: browse_abandonment).
UTM Term
A UTM Term (utm_term) is added for notification types that consist of a sequence of multiple notifications. The UTM Term will then indicate the item in the sequence, for example utm_term=reminder_2.
Frequently Asked Questions
How can I add my own UTM parameter? If you want to overwrite the default UTM parameters added by PushOwl, simply insert the link with the UTM parameters you want.
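As an illustration of what these parameters look like on a destination URL, the small Python sketch below appends UTM parameters to a product link. This is not PushOwl's implementation; the store URL and parameter values are invented for the example:
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def add_utm(url, **utm):
    # Append utm_* query parameters to a URL, keeping any existing query string.
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update({f"utm_{key}": value for key, value in utm.items()})
    return urlunsplit(parts._replace(query=urlencode(query)))

link = add_utm(
    "https://example-store.com/products/mug",  # hypothetical product page
    source="pushowl",
    medium="push",
    campaign="browse_abandonment",
    term="reminder_2",
)
print(link)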
https://docs.pushowl.com/en/articles/3174017-adding-utm-parameters
2021-10-16T03:20:44
CC-MAIN-2021-43
1634323583408.93
[]
docs.pushowl.com
Sort with a Single Field Index¶
If an ascending or a descending index is on a single field, the sort operation on the field can be in either direction. For example, create an ascending index on the field a for a collection records: db.records.ensureIndex( { a: 1 } ) This index can support an ascending sort on a: db.records.find().sort( { a: 1 } ) The index can also support the following descending sort on a by traversing the index in reverse order: db.records.find().sort( { a: -1 } )
Sort on Multiple Fields¶
The sort must specify the same sort direction (i.e., ascending/descending) for all its keys as the index key pattern, or specify the reverse sort direction for all its keys as the index key pattern. For example, an index key pattern { a: 1, b: 1 } can support a sort on { a: 1, b: 1 } and { a: -1, b: -1 } but not on { a: -1, b: 1 }.
Sort and Index Prefix¶
Suppose a collection data has the following compound index: db.data.ensureIndex( { a:1, b: 1, c: 1, d: 1 } ) Then, the following are prefixes for that index: { a: 1 } { a: 1, b: 1 } { a: 1, b: 1, c: 1 } The following query and sort operations use the index prefixes to sort the results. These operations do not need to sort the result set in memory. Consider the following example in which the prefix keys of the index appear in both the query predicate and the sort: db.data.find( { a: { $gt: 4 } } ).sort( { a: 1, b: 1 } ) This operation can use the prefix { a: 1, b: 1 } of the index { a: 1, b: 1, c: 1, d: 1 }. In contrast, the following operations do not meet the necessary conditions: db.data.find( { a: { $gt: 2 } } ).sort( { c: 1 } ) db.data.find( { c: 5 } ).sort( { c: 1 } ) These operations will not efficiently use the index { a: 1, b: 1, c: 1, d: 1 } and may not even use the index to retrieve the documents.
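For readers working from Python, the same behavior can be reproduced with pymongo. The sketch below assumes a local MongoDB instance and mirrors the shell examples above; the database and collection names are placeholders:
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
records = client["test"]["records"]

# Single-field ascending index; it supports sorts in either direction.
records.create_index([("a", ASCENDING)])
ascending = records.find().sort("a", ASCENDING)
descending = records.find().sort("a", DESCENDING)  # index walked in reverse order

# Compound index; sorting on a prefix of the key pattern avoids an in-memory sort.
data = client["test"]["data"]
data.create_index([("a", ASCENDING), ("b", ASCENDING), ("c", ASCENDING), ("d", ASCENDING)])
uses_prefix = data.find({"a": {"$gt": 4}}).sort([("a", ASCENDING), ("b", ASCENDING)])

for doc in uses_prefix.limit(5):
    print(doc)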
http://docs.mongodb.org/manual/tutorial/sort-results-with-indexes/
2014-09-15T09:28:02
CC-MAIN-2014-41
1410657104131.95
[]
docs.mongodb.org
Your device is not connected to the Internet. Please try again or contact the service provider for more information. Description This message appears in the BlackBerry® Browser 6.0 or later if a user tries to download the BlackBerry® Client for Microsoft® SharePoint® from the installation website.
http://docs.blackberry.com/en/admin/deliverables/31148/Your_device_is_not_connected_ShrPnt_1.1_1580546_11.jsp
2014-09-15T09:34:49
CC-MAIN-2014-41
1410657104131.95
[]
docs.blackberry.com
We have included a wizard in the GPD plugin to create a jBPM project. We have opted to create a project containing based on a template already containing a number of advanced artifacts that we will ignore for this section. In the future we will elaborate this wizard and offer the possibility to create an empty jBPM project as well as projects based on templates taken from the jBPM tutorial. The jBPM projects in Eclipse are in fact Java projects with a number of additional settings, so we switch to the Eclipse Java perspective. To create a new jBPM project using the project creation wizard, we select 'New->Project...' and in the New Project dialog, we select 'JBoss jBPM -> Process Project' (Figure 2.1, “New Project Dialog ”). Clicking 'Next' brings us to the wizard page where we have to specify the name and location for the project. We choose for example 'Hello jBPM' as the name and accept the default location (Figure 2.2, “Process Name and Location ”). Clicking on Finish results in the project being created. The wizard creates four source folders : one for the processes ('src/process'), one for the java sources ('src/java'), one for the unit tests ('test/java') and one for the resources such as the jbpm.properties and the hibernate.properties files ('src/resources'). In addition a classpath container with all the core jBPM libraries is added to the project (Figure 2.3, “Layout of the Process Project ” Looking inside the different source folders will reveal a number of other artifacts that were generated, but we will leave these untouched for the moment. Instead, we will look at another wizard that enables us to create an empty process definition. When the project is set up, we can use a creation wizard to create an empty process definition. Bring up the 'New' wizard by clicking the 'File->New->Other...' menu item. The wizard opens on the 'Select Wizard' page (Figure 2.4, “The Select Wizard Page ”). Selecting the 'JBoss jBPM' category, then the 'Process Definition' item and clicking on the 'Next' button brings us to the 'Create Process Definition' page (Figure 2.5, “The Create New Process Definion Page ”). We choose 'hello' as the name of the process archive file. Click on the 'Finish' button to end the wizard and open the process definition editor (Figure 2.6, “The Process Definition Editor ”). As you can see in the package explorer, creating a process definition involves creating a folder with the name of the process definition and a '.par' extension and populating this folder with two .xml files : gpd.xml and processdefinition.xml. The first of these two contains the graphical information used by the process definition editor. Though you can view the contents with an ordinary xml editor, the default editor opening this file will be the process definition editor. The processdefinition.xml file contains the actual process definition info without the graphical rendering info. At present, the GPD assumes that these two files are siblings. More sophisticated configuration will be supported later. We will create a very simple process definition, consisting of a begin state, an intermediate state and an end state. Select respectively 'Start', 'State' and 'End' on the tools palette and click on the canvas to add these nodes to the process definition. The result should look similar to Figure 2.7, “A Simple Process With Three Nodes ” We will connect the nodes with transitions. 
Select the 'Transition' tool in the tools palette and click on the 'Start' node, then move to the 'State' node and click again to see the transition being drawn. Perform the same steps to create a transition from the 'State' node to the 'End' node. The result looks like Figure 2.8, “A Simple Process With Transitions ”. You can see an outline of the process being drawn in the Eclipse outline view if it is visible. The process outline comes in two different flavours. One of them is the classical treeview. You can see the expanded tree outline view of our minimal process in Figure 2.8, “A Simple Process With Transitions ”. The other possibility is to view the outline as a scrollable thumbnail. This possibility is illustrated in Figure 2.9, “The Outline as Thumbnail ”. Because the process definition is still fairly simple with as little as three nodes, this thumbnail is at the moment bigger than the actual graph, but for complex graphs this feature is particularly interesting. You are able to toggle between these two outline views using the outline toolbar buttons. If the Eclipse Properties view is visible, the relevant properties of the selected item are shown. Some of these properties may be directly editable in the properties view. An example of a directly editable property is the name property of the process definition. As you can see in Figure 2.10, “The Properties of a Process Definition ”, the name property of the process definition can be changed to 'jbay'. When we select the first transition, either by clicking on it on the canvas or by clicking on its node in the tree outline view, we see the properties of this transition in the properties view (Figure 2.11, “The Properties of a Transition ”). We are able to edit the name of the transition, but the 'Source' and 'Target' properties are read-only. We change the name of the first transition to 'to_auction'. We repeat this name change for the second transition and name it 'to_end'. Some properties can be directly edited in the graphical editor. One example of this is the 'Name' property of nodes. You can edit this directly by selecting the node of wich you want to change the name and then click once inside this node. This enables an editor in the node as shown in Figure 2.12, “Directly Editing the Node Name ”. We change the name of the node to 'auction'. Now that we have defined a simple process definition, we can have a look at the xml that is being generated under the covers. To see this xml, click on the source tab of the process definition editor. (Figure 2.13, “The Source View ”). This source tab is editable, so if you know your way around in jpdl, you can create or tweak your process definitions directly in the xml source. But if you want to do this, note that at this moment the layout of the drawing may get messy. Intelligent layout algorithms will probably be added in a later stage. Also, the validity of the xml is not yet enforced. This is on the todo list and will be added in the near future.
http://docs.jboss.com/jbpm/v3/gpd/firstprocess.html
2014-09-15T09:26:42
CC-MAIN-2014-41
1410657104131.95
[]
docs.jboss.com
The Jikes RVM manages bugs, feature requests, tasks and patches using an issue tracker. When submitting an issue, please take a moment to read and follow our advice for Reporting Bugs. The Research Archive is also maintained within another issue tracker. Current Trackers Issue Tracker Research Archive Tracker Historic Trackers In 2007, we migrated between different issue trackers. The historic issue trackers are listed below.
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=74082&selectedPageVersions=5&selectedPageVersions=6
2014-09-15T09:33:02
CC-MAIN-2014-41
1410657104131.95
[]
docs.codehaus.org
Mission Cargo is a thin wrapper around existing containers (e.g. J2EE containers). It provides different APIs to easily manipulate containers. Cargo provides the following APIs: - A Java API to start/stop/configure Java Containers and deploy modules into them. We also offer Ant tasks, Maven 1, Maven 2 plugins. Intellij IDEA and Netbeans plugins are in the sandbox. - A Java API to parse/create/merge J2EE Modules Check the utilisation page to understand what you can use Cargo for.
http://docs.codehaus.org/pages/viewpage.action?pageId=111182246
2014-09-15T09:34:12
CC-MAIN-2014-41
1410657104131.95
[]
docs.codehaus.org
Instructions for this feature can be found by pressing the Help button when in the Article Manager, or by clicking this link. In Joomla! 1.5.8 images might not be displayed on contact pages. This issue will be fixed in the next release (1.5.9), but until then here are the steps for fixing it yourself: 1. Open components/com_contact/views/contact/tmpl/default.php with a text editor. 2. Find this line (line 52): <?php echo JHTML::_('image', '/images/stories' . '/'.$this->contact->image, JText::_( 'Contact' ), array('align' => 'middle')); ?> 3. Change this line to the following and save the file: <. The following 3 pages are in this category, out of 3 total.
http://docs.joomla.org/index.php?title=Category:Version_1.5.8_FAQ&oldid=11693
2014-09-15T10:11:53
CC-MAIN-2014-41
1410657104131.95
[]
docs.joomla.org
Creating glossaries Create a new glossary when you need to create a new context for terms. Typically you would want to create new glossaries when terms are used by significantly different audiences or when the same term has significantly different meanings depending on the context. You can create any number of glossaries; glossary names can include letters, numbers, spaces, and underscores. Create a new glossary from the Glossary tab in the left navigation pane:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/atlas-managing-business-terms-with-glossaries/topics/atlas-creating-glossaries.html
2021-07-23T22:30:30
CC-MAIN-2021-31
1627046150067.51
[array(['../images/atlas-glossary-add-callout.png', None], dtype=object)]
docs.cloudera.com
Back up HDFS metadata
You can back up HDFS metadata without taking down either HDFS or the NameNodes.
- Prepare to back up the HDFS metadata: Regardless.
- Backing up NameNode metadata: You must back up the VERSION file and then back up the NameNode metadata.
- Back up HDFS metadata using Cloudera Manager: HDFS metadata backups can be used to restore a NameNode when both NameNode roles have failed. In addition, Cloudera recommends backing up HDFS metadata before a major upgrade.
- Restoring NameNode metadata: If both the NameNode and the secondary NameNode were to suddenly go offline, you can restore the NameNode.
- Restore HDFS metadata from a backup using Cloudera Manager: When both the NameNode hosts have failed, you can use Cloudera Manager to restore HDFS metadata.
- Perform a backup of the HDFS metadata: You can back up HDFS metadata without affecting the availability of NameNode.
Parent topic: Backing up HDFS metadata
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/data-protection/topics/hdfs-back-up-hdfs-metadata.html
2021-07-23T21:53:50
CC-MAIN-2021-31
1627046150067.51
[]
docs.cloudera.com
update_project_team Update Xcode Development Team ID This action updates the Developer Team ID of your Xcode project. 2 Examples update_project_team update_project_team( path: "Example.xcodeproj", teamid: "A3ZZVJ7CNY" ) Parameters * = default value is dependent on the user's system Documentation To show the documentation in your terminal, run fastlane action update_project_team CLI It is recommended to add the above action into your Fastfile, however sometimes you might want to run one-offs. To do so, you can run the following command from your terminal: fastlane run update_project_team To pass parameters, make use of the : symbol, for example: fastlane run update_project_team teamid:"A3ZZVJ7CNY"
https://docs.fastlane.tools/actions/update_project_team/
2021-07-23T23:22:39
CC-MAIN-2021-31
1627046150067.51
[]
docs.fastlane.tools
- Global shortcuts - Project - Epics GitLab keyboard shortcuts GitLab has several keyboard shortcuts you can use to access its different features. To display a window in GitLab that lists its keyboard shortcuts, use one of the following methods: - In the Help menu in the top right of the application, select Keyboard shortcuts. In GitLab 12.8 and later, you can disable keyboard shortcuts by using the Keyboard shortcuts toggle at the top of the keyboard shortcut window. Although global shortcuts work from any area of GitLab, you must be in specific pages for the other shortcuts to be available, as explained in each section. Global shortcuts These shortcuts are available in most areas of GitLab: Additionally, the following shortcuts are available when editing text in text fields (for example, comments, replies, issue descriptions, and merge request descriptions): The shortcuts for editing in text fields are always enabled, even if other keyboard shortcuts are disabled. Project These shortcuts are available from any page in a project. You must type them relatively quickly to work, and they take you to another page in the project. Issues and merge requests These shortcuts are available when viewing issues and merge requests: Project files These shortcuts are available when browsing the files in a project (go:
https://docs.gitlab.com/ee/user/shortcuts.html
2021-07-23T21:04:36
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
You are viewing documentation for Kubernetes version: v1.18 Kubernetes v1.18 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. Working with Kubernetes Objects Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Learn about the Kubernetes object model and how to work with these objects.
https://v1-18.docs.kubernetes.io/docs/concepts/overview/working-with-objects/
2021-07-23T22:47:04
CC-MAIN-2021-31
1627046150067.51
[]
v1-18.docs.kubernetes.io
You are viewing version 2.23 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version. Armory Platform Compatibility Matrix This page describes the features and capabilities that Armory supports. Note that although Spinnaker™ is part of the Armory Platform, the Armory Platform.. Application metrics for Canary Analysis Application metrics can be ingested by Kayenta to perform Canary Analysis or Automated Canary Analysis (ACA). The following table lists supported app metric providers: Artifacts Artifacts are deployable resources. Stores The following table lists the supported artifact stores: Types The following table lists the supported artifact types: As code solutions Pipelines as Code Pipelines as Code gives you the ability to manage your pipelines and their templates in source control. Supported version control systems The following table lists the supported version control systems: Features The following table lists specific features for Pipelines as Code and their supported versions:: Features The following table lists the Terraform Integration features and their supported versions: Authentication The following table lists the supported authentication protocols: Authorization The following table lists the supported authorization methods: Baking Images The following table lists the supported image bakeries: Browsers Spinnaker’s UI (Deck) works with most modern browsers. Build systems The following table lists the supported CI systems:: Manifest templating The following table lists the supported manifest templating engines: Notifications The following table lists the supported notification systems: Observability The following table lists the supported observabilty providers: Persistent storage Depending on the service, Spinnaker can use either Redis or MySQL as the backing store. The following table lists the supported database and the Spinnaker service: Armory recommends using MySQL as the backing store when possible for production instances of Spinnaker. For other services, use an external Redis instance for production instances of Spinnaker.: Spinnaker Operator Spinnaker Operator and Armory Operator provide Spinnaker users with the ability to install, update, and maintain their clusters via a Kubernetes operator.
https://v2-23.docs.armory.io/docs/armory-platform-matrix/
2021-07-23T21:34:44
CC-MAIN-2021-31
1627046150067.51
[]
v2-23.docs.armory.io
To configure Firebase Push Notifications for CometChat, create a firebase project at and follow the steps below: 1. Click on the "Add Firebase to your Android app" or "Add another app" option. - Go to Android app settings by clicking on the "settings" option in the more settings menu (vertical dotted icon). - Get the "Web API Key" to configure the firebase push notification service from the CometChat Admin Panel. - Add the "Legacy Server Key", which can be found in the Cloud Messaging section, as the "Firebase server key" in the CometChat Admin Panel under Settings -> Mobile tab. - Refer to configure your mobile app to receive the push notifications. You can subscribe to the push channel using the following line: FirebaseMessaging.getInstance().subscribeToTopic("push_channel"); For push notifications in a group, you will get "push_channel" from the response received in the success callback of the joinGroup method. Once you subscribe to this channel, you will start receiving push notifications for the group. You will also receive the push notification for announcements sent from the CometChat administration panel. Note: You will receive the push notification in the form of a data payload in the service class extending FirebaseMessagingService. For SDK versions below 7.0.10, you will need to process this data and show the notification yourself, and for those versions we provide a helper method to which you can pass the RemoteMessage object received in onMessageReceived, as follows: CCNotificationHelper.processCCNotificationData(this,remoteMessage,R.drawable.ic_launcher,R.drawable.ic_launcher_small);
https://docs.cometchat.com/android-sdk/push-notifications/
2021-07-23T22:01:07
CC-MAIN-2021-31
1627046150067.51
[array(['/wp-content/uploads/2017/08/1.png', None], dtype=object) array(['/wp-content/uploads/2017/08/push2.png', None], dtype=object) array(['/wp-content/uploads/2017/08/push1.png', None], dtype=object) array(['/wp-content/uploads/2017/08/push4.png', None], dtype=object) array(['/wp-content/uploads/2017/08/push5.png', None], dtype=object) array(['/wp-content/uploads/2017/08/push6.png', None], dtype=object) array(['/wp-content/uploads/2018/03/push_image.png', None], dtype=object)]
docs.cometchat.com
- Experiment tracking. We recommend using GLEX rather than Experimentation Module for new experiments.
- Implementing an A/B/n experiment using GLEX
- Implementing an A/B experiment using Experimentation Module
Historical Context: Experimentation Module was built iteratively with the needs that appeared while implementing Growth sub-department experiments, while GLEX was built with the findings of the team and an easier-to-use API.
https://docs.gitlab.com/ee/development/experiment_guide/
2021-07-23T21:55:00
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
Geospatial Partitioning Geospatial partitioning is a method for organizing geospatial data, grouping spatial entities in close proximity to each other. This geospatial grouping allows for accelerated geospatial processing of the grouped data. Geospatial joins, specifically, will take advantage of this process improvement. An increase in performance using this technique will be proportional to the size of the data being joined. Example The goal of the geospatial join in this example is to show flights that landed at (or passed over) JFK airport. It will do this by joining geolocation data sampled at various points of disparate airline flight paths with neighborhood zone data from the NYC NTA data set, and specifically, the JFK airport zones. This example relies on the flights data set, which can be imported into Kinetica via GAdmin. That data will be copied to two tables, one with no geopartitioning and one with geopartitioning, and then increased in size by a magnitude of order to show the performance difference between querying the two. The zone data, which includes JFK airport, will come from the NYC Neighborhood data file. Setup First, create the two flight data tables, with and without geopartitioning. The partitioned table is partitioned on the geohash of the flight path data points with a precision of 2, which partitions the data into 135 different groups. Then, load them up with 10 times the original flight data. Lastly, import the NYC neighborhood data Geo-Join Now that the tables are ready, the same geospatial join can be run, one using the geospatially partitioned table, and one using the non-partitioned table. Without Geopartitioning Non-partitioned query: Sample runtime: Query Execution Time: 0.453 s With Geopartitioning Partitioned query: Sample runtime: Query Execution Time: 0.196 s Conclusion & Cautions In this example, the same query, run against a geopartitioned table, is able to execute more than twice as fast as when run against a table without geopartitioning. Note that only the point data table was partitioned, using a geohash precision of 2. This will not be the optimal configuration in every case. This geopartitioning technique may require testing several configurations to determine the one that provides optimal performance for the given data set and query. For each of the tables involved in the join, a decision must be made to partition the table (or not), and if so, what precision to use for the geohash used to partition the table. The size & geospatial distribution of both data sets, as well as the extent to which they overlap, will have an impact on the performance of the query and the configuration required to process it in the best case.
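To illustrate the idea outside of Kinetica, the hedged Python sketch below groups points by a coarse grid-cell key that stands in for the geohash used above, then prunes a point-in-zone join to the buckets that can possibly match. It is purely conceptual: the data is made up, real geohashing is not shown, and none of the Kinetica SQL from this example is reproduced:
from collections import defaultdict

def cell_key(lon, lat, size=0.1):
    # Coarse grid cell standing in for a geohash prefix; nearby points share a key.
    return (int(lon // size), int(lat // size))

# Hypothetical flight-path samples and a rectangular "zone" roughly around an airport.
points = [(-73.78, 40.64), (-73.99, 40.75), (-73.779, 40.641), (-74.17, 40.69)]
zone = {"min_lon": -73.83, "max_lon": -73.75, "min_lat": 40.62, "max_lat": 40.67}

# "Partition" the points: bucket them by cell so the join only inspects relevant buckets.
partitions = defaultdict(list)
for lon, lat in points:
    partitions[cell_key(lon, lat)].append((lon, lat))

# Only cells overlapping the zone's bounding box are scanned, instead of every point.
min_cx, min_cy = cell_key(zone["min_lon"], zone["min_lat"])
max_cx, max_cy = cell_key(zone["max_lon"], zone["max_lat"])
candidates = [
    p
    for cx in range(min_cx, max_cx + 1)
    for cy in range(min_cy, max_cy + 1)
    for p in partitions.get((cx, cy), [])
]
hits = [
    (lon, lat)
    for lon, lat in candidates
    if zone["min_lon"] <= lon <= zone["max_lon"] and zone["min_lat"] <= lat <= zone["max_lat"]
]
print(hits)  # points that fall inside the zone, found without scanning every partition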
https://docs.kinetica.com/7.1/tuning/cs/geo/index.html
2021-07-23T21:17:01
CC-MAIN-2021-31
1627046150067.51
[]
docs.kinetica.com
The available IOPS counter identifies the remaining number of IOPS that can be added to a node or an aggregate before the resource reaches its limit. The total IOPS that a node can provide is based on the physical characteristics of the node—for example, the number of CPUs, the CPU speed, and the amount of RAM. The total IOPS that an aggregate can provide is based on the physical properties of the disks—for example, a SATA, SAS, or SSD disk. While the performance capacity free counter provides the percentage of a resource that is still available, the available IOPS counter tells you the exact number of IOPS (workloads) that can be added to a resource before reaching the maximum performance capacity. For example, if you are using a pair of FAS2520 and FAS8060 storage systems, a performance capacity free value of 30% means that you have some free performance capacity. However, that value does not provide visibility into how many more workloads you can deploy to those nodes. The available IOPS counter may show that you have 500 available IOPS on the FAS8060, but only 100 available IOPS on the FAS2520. A sample latency versus IOPS curve for a node is shown in the following figure. The maximum number of IOPS that a resource can provide is the number of IOPS when the performance capacity used counter is at 100% (the optimal point). The operational point identifies that the node is currently operating at 100K IOPS with latency of 1.0 ms/op. Based on the statistics captured from the node, Unified Manager determines that the maximum IOPS for the node is 160K, which means that there are 60K free or available IOPS. Therefore, you can add more workloads to this node so that your systems are used more efficiently.
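As a quick illustration of how you might act on these numbers, the tiny Python sketch below (not part of Unified Manager) uses the available IOPS figures from the example above to decide where a new workload fits; the 250 IOPS workload size is hypothetical:
def can_host(available_iops, new_workload_iops):
    # A node can take the workload only if its available IOPS headroom covers it.
    return available_iops >= new_workload_iops

nodes = {"FAS8060": 500, "FAS2520": 100}  # available IOPS values from the example above
for name, available in nodes.items():
    print(name, "can host a 250-IOPS workload:", can_host(available, 250))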
https://docs.netapp.com/ocum-99/topic/com.netapp.doc.onc-um-perf-ag/GUID-9A7270C6-13FC-4624-8D35-2B2EB133CA44.html
2021-07-23T22:27:00
CC-MAIN-2021-31
1627046150067.51
[]
docs.netapp.com
Please contribute to the REMnux collection of Docker images of malware analysis applications. You'll get a chance to experiment with Docker, become a master at setting up an application of your choice, and expand the set of tools that others can run for examining malicious software. To get started, review the details in this section below. Before creating the Dockerfile for the application you'd like to contribute to the REMnux toolkit, reach out to Lenny Zeltser, the primary REMnux maintainer, to confirm that the application is a fit for REMnux. A properly-formatted Dockerfile describes the steps necessary to build and configure your application inside a Docker container in a repeatable and unattended manner. To get a sense for the structure of such files, browse the REMnux repository of Dockerfiles on Github. To explain how to build such files, we'll use the JSDetox Dockerfile as an example. The beginning of your Dockerfile should include comments that state which application is included in the image, who created the app and where it can be obtained in a traditional form. The comments should explain how the user of the image should run it. For instance:
# This Docker image encapsulates the JSDetox malware analysis tool by @sven_t
# from
# To run this image after installing Docker, use the following command:
# sudo docker run -d --rm --name jsdetox -p 3000:3000 remnux/jsdetox
# Then, connect to using your web browser.
REMnux images typically use a minimal Docker image of Ubuntu 18.04 as a starting point, as designated by the FROM directive below. The LABEL directives specify metadata such as the maintainer and version of the Dockerfile:
FROM ubuntu:18.04
LABEL maintainer="Lenny Zeltser (@lennyzeltser,)"
LABEL updated="1 May 2020"
The USER directive specifies the user inside the container that should perform the installation steps ("root"). The RUN directive specifies the commands to run inside the container to install the software. Your Dockerfile should include the apt-get update command, followed by apt-get install -y and a listing of the Ubuntu packages the application requires. The starting point for the image is a minimal Ubuntu installation, so assume that a given package is absent unless you explicitly install it:
USER root
RUN apt-get update && apt-get install -y \
git \
ruby \
ruby-dev \
bundler \
zlib1g-dev \
build-essential && \
rm -rf /var/lib/apt/lists/*
Note that the RUN command above links several commands together using && and employs \ to break this sequence of commands into multiple lines for readability. We're linking several commands like this to slightly minimize the size of the resulting Docker image file. This is also the reason why we include the rm command to get rid of the package listing.
The following RUN directive sets up the non-root user creatively named "nonroot", so that commands and applications that don't require root privileges have a more restricted environment within which to run:
RUN groupadd -r nonroot && \
useradd -r -g nonroot -d /home/nonroot -s /sbin/nologin -c "Nonroot User" nonroot && \
mkdir /home/nonroot && \
chown -R nonroot:nonroot /home/nonroot
The next set of directives tells Docker to start running commands using the newly-set up "nonroot" user, defines the working directory to match that user's home directory and retrieves the code for the application we're installing (JSDetox, in this case):
USER nonroot
WORKDIR /home/nonroot
RUN git clone
The following instructions will install the application using the bundle install command, according to the JSDetox installation instructions. These steps need to run as "root" to have the ability to copy the application's files into protected locations:
USER root
WORKDIR /home/nonroot/jsdetox
RUN sed "s/, '0.9.8'/, '0.12.3'/g" -i Gemfile
RUN bundle install
The final set of directives below tells Docker to switch back to using the "nonroot" user and sets the working directory to the location from which JSDetox should be launched. It also specifies which command Docker should run when this image is launched without any parameters:
USER nonroot
EXPOSE 3000
WORKDIR /home/nonroot/jsdetox
CMD ./jsdetox -l $HOSTNAME 2>/dev/null
By default, JSDetox listens on localhost. To give us the opportunity to connect to JSDetox from outside of its container, the command above launches the tool with the -l parameter and specifies the $HOSTNAME variable. This environment variable is automatically defined to match the hostname that Docker will assign when this container runs, which will allow JSDetox to listen on the network interface accessible from our underlying host. It's difficult to create a Dockerfile, such as the one we reviewed above, in one step. Inevitably, some command will run in an unexpected manner, preventing the application from installing properly. Before documenting your steps in the Dockerfile, consider launching the base Ubuntu container like this:
docker run --rm -it ubuntu:18.04 bash
Then, manually type and write down the commands into the container's shell to install the desired application. Once you've validated that a specific sequence of commands works, start building a Dockerfile by adding your instructions one or two at a time to validate that they work as intended. Once you've created a Dockerfile that contains the desired directives, go to the directory where the file is present and run the following command, where "image-name" is the name you'd like to assign to the image file you're building:
docker build -t=image-name .
After Docker builds the image, you can run it using the following command to get a shell in the container where your application has been installed:
docker run --rm -it image-name bash
Of course, "image-name" in the command above should correspond to the name you've assigned to the image. The --rm parameter directs Docker to automatically remove the container once it finishes running. This gets rid of any changes the application may have made to its local environment when it ran, but does not remove the cached image file that represents the app on your system. The -it parameter requests that Docker open an interactive session to the container so you can interact with it.
Once you have built and tested your Dockerfile, share it with Lenny Zeltser, so he can review it and, if appropriate, add your contribution to the REMnux repository. The container will be isolated from the host system: by default it will be able to communicate over the network in the outbound direction, but won't accept inbound traffic. Also, if the container is invoked with the --rm parameter, its contents will disappear after it stops running. When building the image, anticipate the user's need to communicate with the app inside the container over the network or to pass files in and out of the container. In the JSDetox example above, the application listens on TCP port 3000. In its default configuration, JSDetox listens on localhost, which would make its port inaccessible from outside its Docker container. This is why we launched JSDetox with the -l $HOSTNAME parameter. This directed the application to listen on the network interface that could be accessed from outside the container. Unless the user explicitly requests access to the container's port when launching its image, no ports will be accessible from the underlying system. Fortunately, Docker allows us to use the -p parameter to specify that a specific port within the container should be accessible from outside the container. For example, to access JSDetox's port 3000, the user needs to specify -p 3000:3000. This maps the container's port 3000 to the underlying host's port 3000, allowing the user to communicate with JSDetox by connecting to using a web browser. There is no need to share files with JSDetox inside the container by using the file system, because this application interacts with the user through the web browser. In contrast, some tools expect the user to provide input or share output via the file system. Docker supports the -v parameter to share a directory between the underlying host and the container. For example, let's say we wanted to share a folder with the container running Rekall, which is available in the REMnux repository on Docker Hub. If the memory image file that you'd like to analyze is on your underlying host in the ~/files directory, you could share that directory with the Rekall container by specifying -v ~/files:/home/nonroot/files when running the application's image:
sudo docker run --rm -it -v ~/files:/home/nonroot/files remnux/rekall bash
This maps the local ~/files directory to the /home/nonroot/files directory inside the container. The Rekall image is built to run the user-designated command (e.g., bash) as the user "nonroot". To ensure that the non-root user has access to the underlying host's ~/files directory, the user of the app will need to make that directory world-accessible (i.e., chmod a+xwr ~/files) before launching the container.
https://docs.remnux.org/get-involved/add-or-update-tools/contribute-dockerfile
2021-07-23T22:12:50
CC-MAIN-2021-31
1627046150067.51
[]
docs.remnux.org
1. Go to Settings → Billing → Billing & Payment info. 2. Scroll down and Click 'Pause or cancel'. 3. Select 'Delete account'. You'll be redirected to a page where we'll ask you to specify the reason for deleting your account. Your feedback means a lot to us and helps us grow. 4. Select the reason for your cancellation and click DELETE ACCOUNT. Alternatively, you can also send us a message, preferably to [email protected]. We will cancel any future payments and your account will remain active until your billing period is over. After that, we'll delete your account. Q: I cancelled my subscription but I changed my mind and would like to keep the account. What should I do? Log in to your account and click the button " I WANT TO KEEP IT". You can also manage your subscription from your Billing page.
https://docs.woodpecker.co/en/articles/5268076-deleting-account
2021-07-23T22:13:31
CC-MAIN-2021-31
1627046150067.51
[array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/341364536/a88734632f35d2cc6e1e55fa/file-DRE2hzVjEB.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/341364538/12d033febbd1230bfb4d4b11/file-psZPoBVjbM.jpg', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/341364543/62f231d2afcd4f934d9c8020/file-7jNlHhIvn5.png', None], dtype=object) array(['https://woodpeckerco-0d8c91672dff.intercom-attachments-1.com/i/o/341364548/c84aea9c7743886f25715d6d/file-2Ici9fQjT2.png', None], dtype=object) ]
docs.woodpecker.co
We need updated screenshots. Feel free to follow the edit this page link and contribute. Imagine you need to sell a DrupalCon t-shirt. This t-shirt comes in different sizes and colors. Each combination of size and color has its own SKU, so you know which color and size the customer has purchased and you can track exactly how many of each combination you have in stock. Color and size are product attributes. Blue and small are product attribute values, belonging to the mentioned attributes. The combination of attribute values (with a SKU and a price) is called a product variation. These variations are grouped inside a product. For our t-shirt we need two attributes: color and size. Let's start by creating the color attribute. Go to the Drupal Commerce administration page and visit the Product Attributes link. Click on the Add product attribute link to create an attribute. After you have created the color attribute, we need to define at least one value. Normally we would simply say the color is "blue" or "red" but sometimes you might need to further define the attribute using fields. Adding fields is covered in detail later on in the documentation. The product attribute values user interface allows creating and re-ordering multiple values at the same time and a very powerful translation capability: Next, you will need to add the attribute to the product variation type. You can find these at /admin/commerce/config/product-variation-types and you just need to add/edit a product variation type that requires your new attribute. After you have added "Color" and the various colors your t-shirts are available in, the next step is to add that "color" attribute to our product. Store administrators can do this on the product variation type form, the checkbox in the last step automatically created entity referenced fields as needed: Product attributes are so much more than a word. Often times they represent a differentiation between products that is useful to call out visually for customers. The fieldable attribute value lets the information architect decide what best describes this attribute. Like any other fieldable entity, you can locate the list of attribute bundles and click edit fields: /admin/commerce/product-attributes Add a field as you would expect. Most fields are supported and will automatically show up when you go to add attribute values: Editing the attribute values is pretty easy. Simply locate the attribute type that has the values you want to edit: /admin/commerce/product-attributes And click "edit" and you will be taken to a screen to edit all the attributes of that type. After creating attributes, the product variation type needs to know that it uses the attribute. The product variations are at /admin/commerce/config/product-variation-types and once you've clicked on the attribute you want... Fields are added to the variation type that can then be modified. By default, all attribute fields are required. If your attribute is optional (perhaps some of the drupalcon t-shirts only come in blue), then you can locate the manage fields of your particular product variation type and make the color attribute optional by following these steps: /admin/commerce/config/product-variation-types Click the drop down next to the variation type you want and click "manage fields" Found errors? Think you can improve this documentation? edit this page
https://docs.drupalcommerce.org/commerce2/user-guide/products/configure-product-attributes
2021-07-23T22:10:28
CC-MAIN-2021-31
1627046150067.51
[]
docs.drupalcommerce.org
Access Control List Tutorial From Joomla! Documentation Note: This is a draft version written for the October 2009 alpha2 release of Joomla! version 1.6. Since version 1.6 is still under active development, screenshots and other information discussed in this article may change before the final release of version 1.6. However, it is expected that the basic concepts here will not change. ACL Defined ACL stands for Access Control List. According to Wikipedia, an ACL is a list of permissions attached to an object; it specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Users, Groups, and Access Levels With the definition in mind, let's look at how we set up the ACL for our site in version 1.6. If we look at the User Manager for 1.6, we can see the major changes from version 1.5, summarized in the chart below, which shows what has changed between versions 1.5 and 1.6. ACL Examples Customising Back End Permissions Joomla! version 1.6 will install with the same familiar back-end permissions as those of version 1.5. However, with 1.6, you can easily change these to suit the needs of your site. It is important to understand that the permissions for each action are inherited from the level above. Let's go through this to see how it works. The first group, Public, has nothing set for any actions. The default in this case is for no permissions. So, as you would expect, the Public group has no special permissions. The second group is Registered. The next group is Administrator. Notice that members of this group are granted additional back-end permissions. The next group, Manager, is a "child" group of the Administrator group. So, by default, the Manager group inherits all of the same permissions as the Administrator group. The next two groups, Park Rangers and Publishers, are both children of the Registered group and inherit that group's default permissions. The Editor group is a child of the Publishers group, and the Authors group is a child of the Editor group. Since these groups have no back-end login permissions, we will discuss them later. The last group is the Super Users group. This group has the Allow permission for the Admin action. Because of this, members of this group have super user permissions throughout the site. They are the only users who can access and edit values on this screen. If you decided not to allow members of Administrator to delete objects or change their state, you would change their permissions in these columns to Inherit (or Deny). In this case, you would also need to change the Super Administrator permissions for these columns to Allow, else no one would be able to perform Delete or Edit State actions. Now, let's continue to see how the default back-end permissions for version 1.6 mimic the permissions for version 1.5. The Super Users group in 1.6 is equivalent to the Super Administrator group in 1.5. From this screen, Manager can do all actions for articles except the Admin action, and Administrator can do everything. So both groups can create, delete, edit, and change the state of articles, but a Manager cannot see or change the default settings for articles. Now, let's look at the groups Publisher, Editor, and Author and see how their permissions are set. All of these groups have Inherit permission for Admin and Manage. Since their default for these is blank, they do not have permission for these. So none of these groups are allowed to work with articles on the back end. Publisher has Allow permission for Create, Edit, and Edit State.
This means that Publishers, by default, can add new articles, edit existing articles, and change the state of articles. Note that, by default, Publishers cannot delete articles. Editors inherit the same permissions as Publishers except that they have Deny permission for the Edit State action. So Editors can Create and Edit but are not allowed to Edit State, which in this case means to Publish, Unpublish, Trash, or Archive articles. Finally, Authors inherit the Editor permissions except that they also have Deny for the Edit action. So Authors can only Create new articles and cannot do any other action. It is important to remember that these are only default settings that apply to all articles. However, as we will see, they can easily be changed for specific Categories. Question: Does a group inherit permission from its parent group or from a higher level? Example: The Manager group has Deny permission for the Manage action in Users->Options->Permissions. Does Administrator inherit this Deny (since Manager is Administrator's parent group), or does Administrator inherit permission from Global Configuration->Permissions? In other words, do you inherit down the component tree (from Site->User Manager) or down the group tree (from Manager->Administrator)? Answer: Inherit down the group tree. Front End ACL
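To illustrate the answer above (permissions resolve down the group tree, with an explicit Allow or Deny overriding the inherited value), here is a small, hypothetical Python sketch of that resolution logic; it is not Joomla code, and the group/permission data is just the Publisher/Editor/Author example from this tutorial.

# Hypothetical model of Joomla-style permission inheritance down the group tree.
parents = {
    "Public": None,
    "Registered": "Public",
    "Publisher": "Registered",
    "Editor": "Publisher",
    "Author": "Editor",
}

# Explicit settings per group: {group: {action: "Allow" | "Deny"}}; anything unset means Inherit.
settings = {
    "Publisher": {"Create": "Allow", "Edit": "Allow", "Edit State": "Allow"},
    "Editor": {"Edit State": "Deny"},
    "Author": {"Edit": "Deny"},
}

def resolve(group, action):
    """Walk up the group tree until an explicit Allow/Deny is found; default is no permission."""
    while group is not None:
        value = settings.get(group, {}).get(action)
        if value is not None:
            return value
        group = parents[group]
    return "Deny"  # nothing set anywhere -> no permission

for g in ("Publisher", "Editor", "Author"):
    print(g, {a: resolve(g, a) for a in ("Create", "Edit", "Edit State")})

Running this prints Allow for everything for Publisher, Deny only for Edit State for Editor, and Deny for both Edit and Edit State for Author, matching the defaults described in the tutorial.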
https://docs.joomla.org/index.php?title=J2.5:Access_Control_List_Tutorial&oldid=15898
2015-04-18T09:26:51
CC-MAIN-2015-18
1429246634257.45
[]
docs.joomla.org
language-legesher-PROGRAMMINGLANGUAGE package.json: within this file, the tree-sitter grammar that Atom pulls from to parse the programming language correctly will have to be changed to Legesher's tree-sitter counterpart. Remove the package-lock.json file as well as the /node_modules folder for a clean rebuild. Running npm install after such changes will create updated versions of those files. grammars/*: within the grammars folder there will be several files. tree-sitter-PROGRAMMINGLANGUAGE.cson: this file is necessary in order to create the language support in Atom. Update the file name to follow the naming convention for Legesher projects: tree-sitter-legesher-PROGRAMMINGLANGUAGE.cson. We will have to first set up templating within the language here. python.cson: this will need to have more changes made. I've found the best way to start is to search for patterns such as keyword, storage, logical, and '\\b(' and template the text in the match. settings/: in the settings folder there should be at least one cson file that you will have to rename following the Legesher naming conventions: language-legesher-PROGRAMMINGLANGUAGE.cson. language-legesher-PROGRAMMINGLANGUAGE.cson: this should be a pretty simple template for us to go through. This is an example from the python language. snippets/: the snippets folder can contain any number of files, but it is essentially the shortcuts for the language itself when it's being worked on within the editor. This will become more of a priority as Legesher evolves, but in the meantime, we'll keep it as it is. This depends more on the language's translation being accurate for more than just the keywords. spec/: TESTS TESTS TESTS, we all love testing and making sure everything is working correctly. Go through each of these files and make sure the proper things are templated out. Make sure to also go through the fixtures/ folder for more tests to template out. Run apm test to run the language tests in Atom.
https://docs.legesher.io/language-legesher-python/language-guide
2022-05-16T14:44:30
CC-MAIN-2022-21
1652662510138.6
[]
docs.legesher.io
Building your own algorithm container With Amazon SageMaker, you can package your own algorithms that can then be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference. By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. *Note:* SageMaker now includes a pre-built scikit container. We recommend the pre-built container be used for almost all cases requiring a scikit algorithm. However, this example remains relevant as an outline for bringing in other libraries to SageMaker as your own container. Building your own algorithm container When should I build my own algorithm container? Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance Part 2: Using your Algorithm in Amazon SageMaker Upload the data for training Create an estimator and fit the model or I'm impatient, just let me see the code! When should I build my own algorithm container? You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework (such as Apache MXNet or TensorFlow) that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of frameworks is continually expanding, so we recommend that you check the current list if your algorithm is written in a common machine learning environment. Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex on its own or you need special additions to the framework, building your own container may be the right choice. If there isn't direct SDK support for your environment, don't worry. You'll see in this walk-through that building your own container is quite straightforward. Permissions Running this notebook requires permissions in addition to the normal SageMakerFullAccess permissions. This is because we'll be creating new repositories in Amazon ECR. Here, we'll show how to package a simple Python example which showcases the decision tree algorithm from the widely used scikit-learn machine learning package. The example is purposefully fairly trivial since the point is to show the surrounding structure that you'll want to add to your own code so you can train and host it in Amazon SageMaker. The ideas shown here will work in any language or environment. You'll need to choose the right tools for your environment to serve HTTP requests for inference, but good HTTP environments are available in every language these days. In this example, we use a single image to support training and hosting. This is easy because it means that we only need to manage one image and we can set it up to do everything. Sometimes you'll want separate images for training and hosting because they have different requirements. Just separate the parts discussed below into separate Dockerfiles and build two images. Choosing whether to have a single image or two images is really a matter of which is more convenient for you to develop and manage.
If you're only using Amazon SageMaker for training or hosting, but not both, there is no need to build the unused functionality into your container. Docker may be a new concept to some, but it is not difficult, as you'll see here. The way you set up your program is the way it runs, no matter where you run it. Docker is more powerful than environment managers like conda or virtualenv because (a) it is completely language independent and (b) it comprises your whole operating environment, including startup commands, environment variables, etc. In some ways, a Docker container is like a virtual machine, but it is much lighter weight. For example, a program running in a container can start in less than a second and many containers can run on the same physical machine or virtual machine instance. Docker uses a simple file called a Dockerfile to specify how the image is assembled. We'll see an example of that below. You can build your Docker images based on Docker images built by yourself or others, which can simplify things quite a bit. Docker has become very popular in the programming and devops communities for its flexibility and well-defined specification of the code to be run. It is the underpinning of many services built in the past few years, such as Amazon ECS. Amazon SageMaker uses Docker to allow users to train and deploy arbitrary algorithms. In Amazon SageMaker, Docker containers are invoked in one way for training and in another for hosting: In the example here, we don't define an ENTRYPOINT in the Dockerfile so Docker will run the command train at training time and serve at serving time. In this example, we define these as executable Python scripts, but they could be any program that we want to start in that environment. If you specify a program as an ENTRYPOINT in the Dockerfile, that program will be run at startup and its first argument will be train or serve. The program can then look at that argument and decide what to do. If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as an ENTRYPOINT in the Dockerfile and ignore (or verify) the first argument passed in. Running your container during training When Amazon SageMaker runs training, your train script is run just like a regular Python program. A number of files are laid out for your use under the /opt/ml directory: /opt/ml/input/config contains the configuration for the job; hyperparameters.json is a JSON-formatted dictionary of hyperparameter names to values, and these values will always be strings, so you may need to convert them. resourceConfig.json is a JSON-formatted file that describes the network layout used for distributed training. Since scikit-learn doesn't support distributed training, we'll ignore it here. /opt/ml/input/data/<channel_name>/ (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob but it's generally important that channels match what the algorithm expects. The files for each channel are copied into this directory. /opt/ml/model/ is the directory where you write the model that your algorithm generates; Amazon SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the DescribeTrainingJob result. /opt/ml/output is a directory where the algorithm can write a file failure that describes why the job failed. The contents of this file will be returned in the FailureReason field of the DescribeTrainingJob result. For jobs that succeed, there is no reason to write this file as it will be ignored. Running your container during hosting Hosting has a very different model than training because hosting is responding to inference requests that come in via HTTP.
In this example, we use our recommended Python serving stack (nginx, gunicorn, and Flask) to provide robust and scalable serving of inference requests. This stack is implemented in the sample code here and you can mostly just leave it alone. Amazon SageMaker uses two URLs in the container: /ping receives GET requests used to check that the container is healthy, and /invocations receives the client's inference requests; any content-type and accept headers supplied by the client will be passed in as well. The container will have the model files in the same place they were written during training: /opt/ml `-- model `-- <model files> The parts of the sample container In the container directory are all the components you need to package the sample algorithm for Amazon SageMaker: . |-- Dockerfile |-- build_and_push.sh `-- decision_trees |-- nginx.conf |-- predictor.py |-- serve |-- train `-- wsgi.py Let's discuss each of these in turn: ``Dockerfile`` describes how to build your Docker container image. More details below. ``build_and_push.sh`` is a script that uses the Dockerfile to build your container images and then pushes it to ECR. We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms. ``decision_trees`` is the directory which contains the files that will be installed in the container. ``local_test`` is a directory that shows how to test your new container on any computer that can run Docker, including an Amazon SageMaker notebook instance. Using this method, you can quickly iterate using small datasets to eliminate any structural bugs before you use the container with Amazon SageMaker. We'll walk through local testing later in this notebook. In this simple application, we only install five files in the container. You may only need that many or, if you have many supporting routines, you may wish to install more. These five show the standard structure of our Python containers, although you are free to choose a different toolset and therefore could have a different layout. If you're writing in a different programming language, you'll certainly have a different layout depending on the frameworks and tools you choose. The files that we'll put in the container are: ``nginx.conf`` is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is. ``predictor.py`` is the program that actually implements the Flask web server and the decision tree predictions for this app. You'll want to customize the actual prediction parts to your application. Since this algorithm is simple, we do all the processing here in this file, but you may choose to have separate files for implementing your custom logic. ``serve`` is the program started when the container is started for hosting. It simply launches the gunicorn server which runs multiple instances of the Flask app defined in predictor.py. You should be able to take this file as-is. ``train`` is the program that is invoked when the container is run for training. You will modify this program to implement your training algorithm. ``wsgi.py`` is a small wrapper used to invoke the Flask app. You should be able to take this file as-is. In summary, the two files you will probably want to change for your application are train and predictor.py. For the Python science stack, we will start from a standard Ubuntu installation and run the normal tools to install the things needed by scikit-learn. Finally, we add the code that implements our specific algorithm to the container and set up the right environment to run under. Along the way, we clean up extra space. This makes the container smaller and faster to start.
Let's look at the Dockerfile for the example: [ ]: !cat container/Dockerfile The following shell code uses the Dockerfile to build the container image and then push it to ECR. This code looks for an ECR repository in the account you're using and the current default region (if you're using a SageMaker notebook instance, this will be the region where the notebook instance was created). If the repository doesn't exist, the script will create it. [ ]: %%sh # The name of our algorithm algorithm_name=sagemaker-decision-trees cd container chmod +x decision_trees/train chmod +x decision_trees/serve aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${fullname} # Build the docker image locally with the image name and then push it to ECR # with the full name. docker build -t ${algorithm_name} . docker tag ${algorithm_name} ${fullname} docker push ${fullname} Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance While you're first packaging an algorithm for use with Amazon SageMaker, you probably want to test it yourself to make sure it's working right. In the directory container/local_test, there is a framework for doing this. It includes three shell scripts for running and using the container and a directory structure that mimics the one outlined above. The scripts are: train_local.sh: Run this with the name of the image and it will run training on the local tree. For example, you can run $ ./train_local.sh sagemaker-decision-trees. It will generate a model under the /test_dir/model directory. You'll want to modify the directory test_dir/input/data/... to be set up with the correct channels and data for your algorithm. Also, you'll want to modify the file input/config/hyperparameters.json to have the hyperparameter settings that you want to test (as strings). serve_local.sh: Run this with the name of the image once you've trained the model and it should serve the model. For example, you can run $ ./serve_local.sh sagemaker-decision-trees. It will run and wait for requests. Simply use the keyboard interrupt to stop it. predict.sh: Run this with the name of a payload file and (optionally) the HTTP content type you want. The content type will default to text/csv. For example, you can run $ ./predict.sh payload.csv text/csv. The directories as shipped are set up to test the decision trees sample algorithm presented here. Part 2: Using your Algorithm in Amazon SageMaker Once you have your container packaged, you can use it to train models and use the model for hosting or batch transforms. Let's do that with the algorithm we made above. Set up the environment Here we specify a bucket to use and the role that will be used for working with SageMaker. [ ]: # S3 prefix prefix = "DEMO-scikit-byo-iris" # Define IAM role import boto3 import re import os import numpy as np import pandas as pd from sagemaker import get_execution_role role = get_execution_role() Create the session The session remembers our connection parameters to SageMaker. We'll use it to perform all of our SageMaker operations. [ ]: import sagemaker as sage from time import gmtime, strftime sess = sage.Session() Upload the data for training When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic Iris dataset, which we have included. We can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.
[ ]: WORK_DIRECTORY = "data" data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix) Create an estimator and fit the model In order to use SageMaker to fit our algorithm, we’ll create an Estimator that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training: The container name. This is constructed as in the shell commands above. The role. As defined above. The instance count which is the number of machines to use for training. The instance type which is the type of machine to use for training. The output path determines where the model artifact will be written. The session is the SageMaker session object that we defined above. Then we use fit() on the estimator to train against the data that we uploaded above. [ ]: account = sess.boto_session.client("sts").get_caller_identity()["Account"] region = sess.boto_session.region_name image = "{}.dkr.ecr.{}.amazonaws.com/sagemaker-decision-trees:latest".format(account, region) tree = sage.estimator.Estimator( image, role, 1, "ml.c4.2xlarge", output_path="s3://{}/output".format(sess.default_bucket()), sagemaker_session=sess, ) tree.fit(data_location) Hosting your model You can use a trained model to get real time predictions using HTTP endpoint. Follow these steps to walk you through the process. Deploy the model Deploying the model to SageMaker hosting just requires a deploy call on the fitted model. This call takes an instance count, instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint. [ ]: from sagemaker.predictor import csv_serializer predictor = tree.deploy(1, "ml.m4.xlarge", serializer=csv_serializer) Choose some data and use it for a prediction In order to do some predictions, we’ll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works. [ ]: shape = pd.read_csv("data/iris.csv", header=None) shape.sample(3) [ ]: # drop the label column in the training set shape.drop(shape.columns[[0]], axis=1, inplace=True) shape.sample(3) [ ]: import itertools a = [50 * i for i in range(3)] b = [40 + i for i in range(10)] indices = [i + j for i, j in itertools.product(a, b)] test_data = shape.iloc[indices[:-1]] Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us. [ ]: print(predictor.predict(test_data.values).decode("utf-8")) Optional cleanup When you’re done with the endpoint, you’ll want to clean it up. [ ]: sess.delete_endpoint(predictor.endpoint) Run Batch Transform Job You can use a trained model to get inference on large data sets by using Amazon SageMaker Batch Transform. A batch transform job takes your input data S3 location and outputs the predictions to the specified S3 output folder. Similar to hosting, you can extract inferences for training data to test batch transform. Create a Transform Job We’ll create an Transformer that defines how to use the container to get inference results on a data set. 
This includes the configuration we need to invoke SageMaker batch transform: The instance count which is the number of machines to use to extract inferences The instance type which is the type of machine to use to extract inferences The output path determines where the inference results will be written [ ]: transform_output_folder = "batch-transform-output" output_path = "s3://{}/{}".format(sess.default_bucket(), transform_output_folder) transformer = tree.transformer( instance_count=1, instance_type="ml.m4.xlarge", output_path=output_path, assemble_with="Line", accept="text/csv", ) We use transform() on the transformer to get inference results against the data that we uploaded. You can use these options when invoking the transformer. The data_location which is the location of the input data The content_type which is the content type set when making the HTTP request to the container to get predictions The split_type which is the delimiter used for splitting input data The input_filter which indicates that the first column (ID) of the input will be dropped before making the HTTP request to the container [ ]: transformer.transform( data_location, content_type="text/csv", split_type="Line", input_filter="$[1:]" ) transformer.wait() For more information on the configuration options, see the CreateTransformJob API View Output Let's read the results of the above transform job from the S3 files and print the output [ ]: s3_client = sess.boto_session.client("s3") s3_client.download_file( sess.default_bucket(), "{}/iris.csv.out".format(transform_output_folder), "/tmp/iris.csv.out" ) with open("/tmp/iris.csv.out") as f: results = f.readlines() print("Transform results: \n{}".format("".join(results)))
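As a companion to the container walkthrough above, here is a minimal sketch (not the notebook's actual train file) of what an executable train program can look like, following the /opt/ml contract described earlier: read hyperparameters.json, read the training channel, write the model to /opt/ml/model, and write /opt/ml/output/failure on error. The channel name "training", the max_leaf_nodes hyperparameter, and the use of joblib are illustrative assumptions.

#!/usr/bin/env python
# Minimal, illustrative sketch of a SageMaker-style train entry point.
import json
import os
import sys
import traceback

import pandas as pd
from sklearn import tree
import joblib

prefix = "/opt/ml/"
param_path = os.path.join(prefix, "input/config/hyperparameters.json")
train_path = os.path.join(prefix, "input/data/training")   # assumes a "training" channel
model_path = os.path.join(prefix, "model")
output_path = os.path.join(prefix, "output")

def train():
    # Hyperparameter values arrive as strings, so convert explicitly.
    with open(param_path) as f:
        params = json.load(f)
    max_leaf_nodes = int(params.get("max_leaf_nodes", 10))

    # Concatenate all CSV files in the training channel; label assumed in the first column.
    files = [os.path.join(train_path, f) for f in os.listdir(train_path)]
    data = pd.concat(pd.read_csv(f, header=None) for f in files)
    y, X = data.iloc[:, 0], data.iloc[:, 1:]

    clf = tree.DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    clf.fit(X, y)

    # Anything written to /opt/ml/model is packaged into the model archive by SageMaker.
    joblib.dump(clf, os.path.join(model_path, "decision-tree-model.pkl"))

if __name__ == "__main__":
    try:
        train()
        sys.exit(0)
    except Exception:
        # The contents of /opt/ml/output/failure are surfaced as the FailureReason.
        with open(os.path.join(output_path, "failure"), "w") as f:
            f.write(traceback.format_exc())
        sys.exit(255)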
https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html
2022-05-16T14:59:47
CC-MAIN-2022-21
1652662510138.6
[]
sagemaker-examples.readthedocs.io
Uploading a task to the Control Room for deployment You must upload a task to the Control Room to make it available for deployment on a Bot Runner machine and ready to be scheduled and run on a regular schedule. Procedure - Log in to the Enterprise Client. - Select the folder from the Tasks pane that contains the task you want to upload. - Select the task from the My Tasks section and click Upload on the toolbar. Or, you can right-click the task and select Upload from the menu.
https://docs.automationanywhere.com/ko-KR/bundle/enterprise-v11.3/page/enterprise/topics/bot-insight/user/uploading-task-on-enterprise-control-room-for-deployment.html
2022-05-16T15:56:19
CC-MAIN-2022-21
1652662510138.6
[]
docs.automationanywhere.com
v2.105.0 (February 10, 2022)¶. Fixed: Clear the search conditions when the administrator clicks on the Accounts menu item and display the list of all accounts. Subscription status banner is not to be shown to Launchpad users. Additional reliability fixes. Frame Gateway Added: Support for shared VPC in GCP. Fixed: Reliability fixes and performance optimizations. Operating Systems Added: Support for Ubuntu 20.04 LTS workload virtual machines on AHV, AWS, Azure, and GCP.
https://docs.frame.nutanix.com/releases/v2-105-0.html
2022-05-16T16:10:52
CC-MAIN-2022-21
1652662510138.6
[]
docs.frame.nutanix.com
Developer Dashboard If you want to contribute to our ecosystem of Apps, Extensions, and Articles you can do so by following these steps: - Request access to the Developer Program in Account Settings. - After approval, you can go into your Developer Dasboard and submit your App, Extension, or Article for approval and publication. We regularly showcase great work in our community which can lead to free stuff and more formal work with us. Any questions? Reach out to us on our Slack channel.
https://docs.cosmicjs.com/dashboard/developer-dashboard
2022-05-16T16:23:22
CC-MAIN-2022-21
1652662510138.6
[]
docs.cosmicjs.com
ImageGalleryFullscreenViewerSettings Class Contains fullscreen viewer specific settings. Namespace: DevExpress.Web Assembly: DevExpress.Web.v20.2.dll Declaration public class ImageGalleryFullscreenViewerSettings : PropertiesBase Public Class ImageGalleryFullscreenViewerSettings Inherits PropertiesBase Related API Members The following members accept/return ImageGalleryFullscreenViewerSettings objects: Remarks These settings can be accessed via the ASPx
https://docs.devexpress.com/AspNet/DevExpress.Web.ImageGalleryFullscreenViewerSettings?v=20.2
2022-05-16T16:12:20
CC-MAIN-2022-21
1652662510138.6
[]
docs.devexpress.com
Link¶ The link element provides a button widget that refers to a defined link, such as a website or script. Configuration¶ Show label: Enables or disables text (title) next to the button (Default: true). Title: Title of the element. The title will be listed in "Layouts" and allows you to distinguish between different buttons. It is displayed if "Show label" is activated. Tooltip: Text that is shown when the mouse hovers over the button for a longer time. Icon: Symbol of the button. Based on a CSS class. Target URL: Reference to a website or a script. Example¶ It is possible to create and adjust different buttons with different functions. Buttons can refer to features which are included in the Content area. For example, it is possible to create a Legend button or Line- and/or Area Ruler buttons: Link to a Webpage¶ First, you have to select the link element by clicking on the + symbol in the Toolbar section in the Layouts tab. After the selection of the link element, the "Add element - Link" dialog box opens, where you can configure the element. You can set the name of the link button in the field Title. This title will be displayed as a label next to the icon if Show label is active. In the field Tooltip, you can define a text that will be displayed as a tooltip when hovering over the button. You can choose from a variety of icons to set the icon for your link button. YAML-Definition:¶ title: Link # title class: Mapbender\CoreBundle\Element\Button tooltip: Visit the Mapbender Website # text to use as tooltip icon: iconInfoActive # icon CSS class to use label: true # false/true to label the button, default is true click: # refer to a website or script
https://docs.mapbender.org/current/en/functions/misc/link.html
2022-05-16T14:59:34
CC-MAIN-2022-21
1652662510138.6
[]
docs.mapbender.org
Long-term optical tracking is one of the most important issues for many computer vision applications in real-world scenarios. The development in this area is very fragmented and this API is a unique interface useful for plugging in several algorithms and comparing them. This work is partially based on [171] and [117]. See also [220]. To see how the API works, try the tracker demo. If you want to create your own tracker, you should also decide upon the name of the tracker, as it will be known to the user (the current style suggests using all capitals, say MIL or BOOSTING) – we'll call it a "name". See [171] table I (ME) for further information. Fill "modelUpdateImpl" in order to update the model; see [171]. To try your tracker you can use the demo; the first argument is the name of the tracker and the second is a video source. Represents the model of the target at frame \(k\) (all states and scores); see [171]. The set of the pairs \(\langle \hat{x}^{i}_{k}, C^{i}_{k} \rangle\) represents the estimated states for all frames [171]. \(x_{k}\) is the trajectory of the target up to time \(k\). Implements cv::CvFeatureEvaluator.
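To complement the tracker-demo description above, here is a short, hedged Python sketch of driving one of these trackers (MIL in this case) on a video source; it assumes an OpenCV build that includes the contrib tracking module (e.g. the opencv-contrib-python package), and the video path is a placeholder.

import cv2

# Assumes opencv-contrib-python so that the tracking module is available.
tracker = cv2.TrackerMIL_create()

cap = cv2.VideoCapture("video.avi")  # placeholder video source
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read the first frame")

# Select the initial bounding box of the target interactively.
bbox = cv2.selectROI("tracking", frame, False)
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

Swapping TrackerMIL_create for another factory (for example a BOOSTING-style tracker, where available in your build) is the "name" selection the demo description refers to.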
https://docs.opencv.org/4.0.0-rc/d9/df8/group__tracking.html
2022-05-16T15:46:06
CC-MAIN-2022-21
1652662510138.6
[]
docs.opencv.org
Manage cluster vertical scaling (scale up) in Azure Data Explorer to accommodate changing demand Sizing a cluster appropriately is critical to the performance of Azure Data Explorer. A static cluster size can lead to under-utilization or over-utilization, neither of which is ideal. Since demand on a cluster can’t be predicted with absolute accuracy, a better approach is to scale a cluster, adding and removing capacity and CPU resources with changing demand. There are two workflows for scaling an Azure Data Explorer cluster: - Horizontal scaling, also called scaling in and out. - Vertical scaling, also called scaling up and down. This article explains the vertical scaling workflow: Configure vertical scaling In the Azure portal, go to your Azure Data Explorer cluster resource. Under Settings, select Scale up. In the Scale up window, you will see a list of available SKUs for your cluster. For example, in the following figure, only four SKUs are available. The SKUs are disabled because they're the current SKU, or they aren't available in the region where the cluster is located. To change your SKU, select a new SKU and click Select. Note - The vertical scaling process can take a few minutes, and during that time your cluster will be suspended. - Scaling down can harm your cluster performance. - The price is an estimate of the cluster's virtual machines and Azure Data Explorer service costs. Other costs are not included. See Azure Data Explorer cost estimator page for an estimate and the Azure Data Explorer pricing page for full pricing information. You've now configured vertical scaling for your Azure Data Explorer cluster. Add another rule for a horizontal scaling. If you need assistance with cluster-scaling issues, open a support request in the Azure portal. Next steps Manage cluster horizontal scaling to dynamically scale out the instance count based on metrics that you specify. Monitor your resource usage by following this article: Monitor Azure Data Explorer performance, health, and usage with metrics.
https://docs.azure.cn/en-us/data-explorer/manage-cluster-vertical-scaling
2022-05-16T16:00:30
CC-MAIN-2022-21
1652662510138.6
[]
docs.azure.cn
My Calendar Pro produces a simple report showing sales, dates, and how many times a given key has been used. As a key is used, the remaining uses will be shown. You can edit any payment to grant additional event licenses or adjust information about the payment: Quantity: The number of event submissions currently available on the event key. Submissions Purchased: The total number of submissions allowed on this key. You cannot change the event submissions key once it has been granted. Search Payments Name/Email: Name and email as provided. Transaction ID: The transaction ID provided from the payment gateway. (Not shown.) Payment Key: Their event submission key. After/Before: Get all payments within a date range. Status: Get payments having a specific status. Search results Earnings Summary My Calendar Pro provides a brief summary of the current month, current year, previous month, previous year, and all time sales totals.
https://docs.joedolson.com/my-calendar-pro/category/payments/
2022-05-16T15:29:24
CC-MAIN-2022-21
1652662510138.6
[]
docs.joedolson.com
Booking Panel The booking panel tool allows you to obtain a list of all your agency's bookings and cancellations and get more detailed information about them. To obtain a list with basic information you can use the Booking List panel. To check a specific booking and obtain all its details you can use the Booking Read panel. To see and download a list of detailed bookings you can use the Booking Reports panel (this functionality is especially used by our DMC clients). Booking List Booking list allows you to filter by different conditions: - Dates: - Booking dates: The date range when the booking was confirmed by the agency. - Check-in date: The date range in which the booking check-in is included. - Transaction type/status: - Include cancellations: List with both effective bookings and cancelled ones. - Only cancellations: List with only cancelled bookings. - Only errors: List with bookings that failed and couldn't be confirmed successfully. - Hotel: - Name: Filter bookings by hotel name. - Code: Filter bookings by hotel code. - Agency: Filter bookings by agency/client. - Provider: In case you work with different suppliers, you can filter by provider name. Booking Read In order to obtain more details and information about a specific booking you can use the Booking Read panel on the left with one of the locators (Client, Provider or TGX). For each booking you will find the following information: - Locators: All booking locators. - Context: - Status: Booking status (success, cancelled, error) - Booking date - Agency - Supplier - Access - Configuration - * Hotel: Hotel code and name - * Check-in date - Mealplan - Market - Nationality - Price & Conditions - Payment type - Cancellation price - Selling price - Purchasing price - Currency exchange - Quote selling price - Quote purchasing price - Quote currency exchange - Quote selling cancel penalties - Quote purchasing cancel penalties - Type - Final Markup - Tax - Breakdown - Base Markup - Base Rappel - Selling pricing rules: Total - Rooms - Main Guest Name - Room: Room name, code, and number of pax Booking List Reports The booking reports tool allows you to obtain a file with all the bookings that match the report parameters specified. The search parameters are the same as those in the Booking Read. Notice that you are asked for a report name (in alphanumeric format); however, to ensure the uniqueness of the name, a datetime prefix will be added at the beginning of the name. The reports created are displayed in the List of Reports. There are two important things to know: Only one report can be generated every 15 minutes. So, once a generation is requested, the user has to wait 15 minutes to request another one. The generation of the report is not instant and might take some time, depending on the volume of bookings to return. If Generation Status is equal to "Executing", the file is not done yet, so the download will not be available until the file is finished. When the status changes to "Finished ok", the download button will be available.
https://docs.travelgatex.com/distribution/extranet/booking-search/
2022-05-16T15:18:04
CC-MAIN-2022-21
1652662510138.6
[]
docs.travelgatex.com
A script example (UnityScript) that overrides NetworkDiscovery.OnReceivedBroadcast so that, when a broadcast is received, the NetworkManager connects to the broadcasting server: #pragma strict public class OverriddenNetworkDiscovery extends NetworkDiscovery { public override function OnReceivedBroadcast(fromAddress: String, data: String) { NetworkManager.singleton.networkAddress = fromAddress; NetworkManager.singleton.StartClient(); } }
https://docs.unity3d.com/2017.1/Documentation/ScriptReference/Networking.NetworkDiscovery.html
2022-05-16T16:40:19
CC-MAIN-2022-21
1652662510138.6
[]
docs.unity3d.com
Sublayouts Reusable sublayouts We often want to use the same layout snippets in different views. For this purpose VM3 has sublayouts. Sublayouts are very similar to the Minilayouts of Joomla, but are written in a more flow-oriented style for easy use. The sublayouts are stored in the FE folder /component/com_virtuemart/sublayouts. You can add your own sublayouts into the core folder, or add/override them via template using /templates/yourtemplate/html/com_virtuemart/sublayouts The first parameter is just the name of the layout (= file) to be called. The static call is just echo shopFunctionsF::renderVmSubLayout('prices',array('product'=>$this->product,'currency'=>$this->currency)); The associative array is then available as $viewData. Or, within a VmView: echo $this->renderVmSubLayout($this->productsLayout,array('products'=>$this->products,'currency'=>$this->currency,'products_per_row'=>$products_per_row,'showRating'=>$this->showRating)); The associative array is added to the context and usable, for example, as $this->products Sublayouts can also be used to create your own userfields. The new TOS userfield is an example of this. Just select the userfield type custom. The name of the userfield is the name of the sublayout that is used. Take a look at tos.php in /components/com_virtuemart/sublayouts. Which layouts appear in the configuration dropdowns? The normal layouts do not appear in the dropdowns as an option if there is an underscore _ in the name. This is to prevent users from selecting only reusable parts of a layout. Unlike the normal layouts, any sublayout must have a unique name, because there is no related view for it. They are all stored in one folder. Therefore the different sublayouts of the same type or group are named with an underscore _. So if you want to create a new sublayout for products and it should appear as a choice in the VM config, the prefix must be "products_". See products_horizon.php as an example.
http://docs.virtuemart.com/tutorials/templating-layouts/199-sublayouts.html
2022-05-16T16:15:15
CC-MAIN-2022-21
1652662510138.6
[]
docs.virtuemart.com
scikit-learn model deployment on SageMaker This notebook uses ElasticNet models trained on the diabetes dataset described in Train a scikit-learn model and save in scikit-learn format. The notebook shows how to: Select a model to deploy using the MLflow experiment UI Deploy the model to SageMaker using the MLflow API Query the deployed model using the sagemaker-runtime API Repeat the deployment and query process for another model Delete the deployment using the MLflow API For information on how to configure AWS authentication so that you can deploy MLflow models in AWS SageMaker from Databricks, see Set up AWS authentication for SageMaker deployment.
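As a rough illustration of the workflow listed above, here is a hedged Python sketch assuming the MLflow 1.x mlflow.sagemaker API and boto3; the region, application name, model URI, and payload format are placeholders, and the exact arguments may differ between MLflow versions.

import json
import boto3
import mlflow.sagemaker as mfs

# Placeholders - adjust to your account, region, and the run you selected in the MLflow UI.
app_name = "diabetes-elasticnet"
model_uri = "runs:/<run-id>/model"
region = "us-west-2"

# 1) Deploy the model to a SageMaker endpoint (MLflow 1.x style API, assumed).
mfs.deploy(app_name=app_name, model_uri=model_uri, region_name=region, mode="create")

# 2) Query the endpoint with the sagemaker-runtime API.
runtime = boto3.client("sagemaker-runtime", region_name=region)
payload = json.dumps({"columns": ["age", "sex", "bmi"], "data": [[0.05, 0.05, 0.06]]})
response = runtime.invoke_endpoint(
    EndpointName=app_name,
    ContentType="application/json",
    Body=payload,
)
print(response["Body"].read().decode())

# 3) Delete the deployment when finished (assumed MLflow 1.x API).
mfs.delete(app_name=app_name, region_name=region)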
https://docs.databricks.com/applications/mlflow/scikit-learn-model-deployment-on-sagemaker.html
2022-05-16T15:01:18
CC-MAIN-2022-21
1652662510138.6
[]
docs.databricks.com
NavBarGroupDataFields Class Contains properties allowing you to specify data fields from which group settings of a bound ASPxNavBar should be obtained. Namespace: DevExpress.Web Assembly: DevExpress.Web.v20.2.dll Declaration Related API Members The following members accept/return NavBarGroupDataFields objects: Remarks An object of the NavBarGroupDataFields type can be accessed by the ASPxNavBar.GroupDataFields property of a navbar control.
https://docs.devexpress.com/AspNet/DevExpress.Web.NavBarGroupDataFields?v=20.2
2022-05-16T15:19:03
CC-MAIN-2022-21
1652662510138.6
[]
docs.devexpress.com
Django 2.2.13 release notes¶ June 3, 2020 Django 2.2.13 fixes two security issues and a regression in 2.2.12. CVE-2020-13254: Potential data leakage via malformed memcached keys¶ The second security issue concerned the admin: query parameters for the admin ForeignKeyRawIdWidget were not properly URL encoded, posing an XSS attack vector. ForeignKeyRawIdWidget now ensures query parameters are correctly URL encoded. Bugfixes¶ - Fixed a regression in Django 2.2.12 that affected translation loading for apps providing translations for territorial language variants as well as a generic language, where the project has different plural equations for the language (#31570). - Tracking a jQuery security release, upgraded the version of jQuery used by the admin from 3.3.1 to 3.5.1.
https://docs.djangoproject.com/en/dev/releases/2.2.13/
2022-05-16T16:25:50
CC-MAIN-2022-21
1652662510138.6
[]
docs.djangoproject.com
The REM HSS Transparent Data Editor lets you use a web browser to view and edit HSS transparent user data. - How it works - Prerequisites - Using the Data Editor - Required configuration for MMTel - Active flags information - Required configuration for Operator Determined Barring - Required configuration for Metaswitch TAS Services How it works The web interface communicates with a web application hosted by Rhino Element Manager, which communicates with the HSS through the Sh Cache Microservice. Administrators can use it to: view provisioned data configure important settings add new data to the HSS remove data from the HSS. Prerequisites Before using the editor, you need to configure: an appropriate HSS configuration for the network operator required settings for MMTel services. For more information on Operator Determined Barring see Operator Determined Barring. Using the Data Editor To use the Data Editor: Edit transparent data for an IMS public identity Required configuration for MMTel To edit HSS transparent user data so it can use OpenCloud’s out-of-the-box IR.92 features, follow the manual configuration steps below. Active flags information There is special behaviour for each service’s "active" attribute. This is described in the Active attributes portion of the architecture document. The user interface has three possible values for each services active flag: Unsetmeans that the XML document does not have an active attribute for this service Truemeans that the attribute exists and has the value true, Falsemeans that the attribute exists and has the value false Required configuration for Operator Determined Barring To edit HSS transparent user data so it can use OpenCloud’s out-of-the-box IR.92 features: Required configuration for Metaswitch TAS Services To edit HSS transparent user data so it can use Metaswitch TAS Services custom document
https://docs.rhino.metaswitch.com/ocdoc/books/sentinel-volte-documentation/3.1.0/sentinel-volte-administration-guide/sentinel-volte-and-data/rem-hss-transparent-data-editor.html
2022-05-16T16:19:58
CC-MAIN-2022-21
1652662510138.6
[]
docs.rhino.metaswitch.com
Password Grant¶ Before you begin, you need: - A valid user account in the API Developer Portal. You can self sign up if it is enabled by an admin. - A valid consumer key and consumer secret pair. Initially, these keys must be generated through the API Developer Portal by clicking GENERATE KEYS on the Production Keys tab of the application. - A running API Gateway instance (typically an API Manager instance should be running). For instructions on the API Gateway, see Components. If the Key Manager is on a different server than the API Gateway, change the server URL (host and ports) of the Key Manager accordingly by adding the following configuration in the <AM_HOME>/repository/conf/deployment.toml file. If you have multiple Carbon servers running on the same computer, change the port with an offset to avoid port conflicts. [apim.key_manager] configuration.ServerURL = "<key-manager-server-url>" Invoking the Token API to generate tokens¶ Combine the consumer key and consumer secret in the format consumer-key:consumer-secret and base64-encode the combined string. The encoded string should be used in the header of the cURL command. Access the Token API by using a REST client such as cURL, with the following parameters. - Assuming that both the client and the API Gateway are running on the same server, use the token API URL of your Gateway. - payload - "grant_type=password&username=<username>&password=<password>&scope=<scope>". Replace the <username> and <password> values as appropriate. Tip: <scope> is optional. Headers - Replace the <base64encode(clientId:clientSecret)> as appropriate. For example, use the following cURL command to access the Token API. It generates two tokens: an access token and a refresh token. You can use the refresh token when the access token needs to be renewed. curl -k -d "grant_type=password&username=<username>&password=<password>" -H "Authorization: Basic <base64encode(clientId:clientSecret)>" -H "Content-Type: application/x-www-form-urlencoded" You receive a response similar to the following: Response: { "scope":"default", "token_type":"Bearer", "expires_in":3600, "refresh_token":"ca5a51f18b2edf4eaa9e4b871e42b58a", "access_token":"f2c66f146278aaaf6513b585b5b68d1d" } Instead of using the Token API, you can generate access tokens from the API Developer Portal's UI. Note: For users to be counted in the Registered Users for Application statistics, which counts the number of users who share each application, they have to generate access tokens using the Password Grant type.
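For readers who prefer a script over cURL, here is a small, hedged Python sketch of the same password-grant token request; the token endpoint URL and the credential values are placeholders you must replace, and verify=False merely mirrors the -k flag in the cURL example above.

import base64
import requests

# Placeholders - replace with your gateway's token endpoint and your own credentials.
token_url = "https://localhost:8243/token"
consumer_key = "<consumer-key>"
consumer_secret = "<consumer-secret>"

auth = base64.b64encode(f"{consumer_key}:{consumer_secret}".encode()).decode()
headers = {
    "Authorization": f"Basic {auth}",
    "Content-Type": "application/x-www-form-urlencoded",
}
payload = {
    "grant_type": "password",
    "username": "<username>",
    "password": "<password>",
}

response = requests.post(token_url, data=payload, headers=headers, verify=False)
tokens = response.json()
print(tokens["access_token"], tokens["refresh_token"])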
https://apim.docs.wso2.com/en/3.0.0/learn/api-security/oauth2/grant-types/password-grant/
2020-07-02T17:45:52
CC-MAIN-2020-29
1593655879738.16
[]
apim.docs.wso2.com
Follow these steps to verify that the installation was successful: #csh #source /usr/pw/pronto/bin/.tmcsh pw system status # pw system status ------------------------ Servers/Daemon Processes ------------------------ services 4820 httpd 6152 jserver 10660 pronet_agent 4680 pronet_cntl 3968 tunnelproxy 7792 rate 3976 dbsrv 6560 mcell 4952 acell 4916 pserver 4217 Review the output for errors. Processes that are not running properly are denoted by a numbered error message, such as 961 > max 1. Processes that are not running at all are denoted by !Not Running!. pw system start
https://docs.bmc.com/docs/display/TSOMD107/Verifying+the+Infrastructure+Management+installation
2020-07-02T19:42:49
CC-MAIN-2020-29
1593655879738.16
[]
docs.bmc.com
Q1. How much time will it take to provision my account? Ans: Free and shared accounts should be instantly activated once the payment is made. For the dedicated accounts, you will be instantly activated and will be able to send out campaigns, but it may take an hour or so to get a dedicated IP assigned to you. It also depends on the number of SMTP servers/IPs you opt for. Q2. Do we need to enter credit card details for a free account? Ans: No. There is a forever-free account that you can use without tagging a credit card. Q3. Is it possible to upgrade the packages on the go? Ans: Yes. Q4. What are the upcoming features? Ans: We are working to incorporate multiple spam analyser tools and 70+ email preview simulators soon. We hope to complete it in 2 months and it will be absolutely free of charge. And there are a few more in the pipeline. Q5. Will we get any promotional offers? Ans: Yes. We do distribute promotional coupons at random, mostly during festive seasons or as signup offers. Q6. Is it possible to use my own domain as the sending and signing (DKIM) domain? Ans: Yes. Q7. Are there any customer contracts or service agreements? Ans: No. You can deactivate the account at any time. All your data, including subscriber lists and campaigns, will be deleted after 60 days of inactivity (that is, if not renewed). That said, we do have strict policies in place when you're using our system. Please take a look here. Q8. Do you offer recurring billing so that I don't need to type my credit card details every time I renew the packages? Ans: Currently no. This is because the RBI (Reserve Bank of India) norms won't allow any company registered in India to debit money from a customer's account without their action. We're now in the process of incorporating a company in the U.S. Once that is done, recurring subscriptions will be in place. Q9. Do you offer offline payments? Ans: Yes. You can bank transfer the amount to us. Before that, you can make an offline order through the customer dashboard. Do contact us once the transfer is done so that we can activate your account. Q10. Do you have working feedback loop (FBL) processors in place to reduce user complaints? Ans: Yes. We sign up at almost all the major ISPs to get the FBL data and unsubscribe the problematic subscribers on an ongoing basis. Still can't find the answer you're looking for? Chat with us or contact us via the contact form available on our website.
https://docs.mailfed.com/faq/
2020-07-02T18:47:20
CC-MAIN-2020-29
1593655879738.16
[]
docs.mailfed.com
Little's Law and How It Impacts Load Testers Load testing is all about queuing, and servicing the queues. The main goals in our tests are parts of the formula itself. For example: the response times for a test are equivalent to the service times of a queue, and load balancing with multiple servers is the same as queue concurrency. Even when we look at how we are designing a test we see relationships to this simple theory. I will walk you through a few examples and hopefully open up this idea for you to use in amazing ways. How to determine concurrency during a load test? (Customer Question: What was the average number of concurrent requests during the test? What about for each server?) Little's Law -> Adjusted to fit load testing terminology L = λW L is the average concurrency λ is the request rate W is the average response time For example: We have a test that ran for 1 hour, and the average response time was 1.2 seconds. The average number of requests sent to the server by the test per second was 16. L = 16 * 1.2 L = 19.2 The average number of concurrent requests processing on the servers was 19.2. Let's say we have perfect load balancing across 5 servers. M = Number of Servers/Nodes. Concurrent requests per Server/Node = L/M = 19.2/5. So, each server is processing ~3.84 concurrent requests on average. This explains why, when we reach our maximum concurrency on a single process, response times begin to grow and throughput levels off. Thus, we need to optimize the service times, or increase the concurrency of processing. This can be achieved by either scaling out or up depending on how the application is architected. Also, this is not entirely true: there will be time spent in different parts of the system, some of which have little to no concurrency (two bits cannot exist concurrently on the same wire electrically, as of writing this article). Can we achieve the scenario objectives given a known average response time hypothesis? For tests that use "per user per hour pacing", we can determine the minimum number of users necessary to achieve the target load given a known average response time under light load. You can obtain the average response times from previous tests or you can run a smoke test with light load to get it. There are 3600 seconds in an hour. W = average response time in seconds for the test case you are calculating. G = throughput goal (transactions per hour). A single user can execute at most 3600/W iterations per hour, so the number of users needed is U = ceil(G/(3600/W)). Let's say we have an average response time of 3.6 seconds, and our goal is to run this test case 2600 times per hour. What would be the minimum number of users to achieve this? U = ceil(2600/(3600/3.6)) = ceil(2.6) = 3. Pacing = G / U ≈ 866. So, the scenario would be 3 users and the test case would be set to execute 866 times per user per hour. Personally, I like to add 20% additional users to account for response time growth just in case the system slows down. For example: I would run with 4 vUsers at 650 Tests per User per Hour, or 5 vUsers at 520 Tests per User per Hour. There are endless possibilities for using this formula to understand and predict the behavior of the system; post some examples of how you would use queueing theory in the comment section below. Have a great day and happy testing!
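A small Python sketch of the calculations above (the numbers are the article's own examples); the 20% head-room adjustment at the end mirrors the author's personal rule of thumb.

import math

def avg_concurrency(request_rate, avg_response_time):
    """Little's Law: L = lambda * W."""
    return request_rate * avg_response_time

def users_needed(goal_per_hour, avg_response_time):
    """Minimum users for per-user-per-hour pacing: U = ceil(G / (3600 / W))."""
    per_user_per_hour = 3600 / avg_response_time
    return math.ceil(goal_per_hour / per_user_per_hour)

# Example 1: 16 requests/s at 1.2 s average response time, 5 balanced servers.
L = avg_concurrency(16, 1.2)             # 19.2 concurrent requests overall
print(L, L / 5)                          # ~3.84 per server

# Example 2: goal of 2600 transactions/hour at 3.6 s average response time.
G, W = 2600, 3.6
U = users_needed(G, W)                   # 3 users
pacing = G // U                          # ~866 executions per user per hour
print(U, pacing)

# Add ~20% head room for response-time growth.
U_padded = math.ceil(U * 1.2)
print(U_padded, G // U_padded)           # 4 users at 650 per user per hour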
https://docs.microsoft.com/en-us/archive/blogs/chwitt/littles-law-and-how-it-impacts-load-testers
2020-07-02T20:04:51
CC-MAIN-2020-29
1593655879738.16
[array(['https://msdnshared.blob.core.windows.net/media/2017/10/leq.png', 'U = Ceil(G/(3600/W))'], dtype=object) ]
docs.microsoft.com
Engaging a new Throttling Policy at Runtime¶ WSO2 API Manager allows you to control the number of successful requests to your API during a given period. You can enable this for APIs in the CREATED and PUBLISHED states, and also for published APIs at runtime. This feature protects your APIs, regulates traffic, and controls access to the resources. Info Attributes - Throttle policy - This section is used to specify the policy for throttling. - Maximum concurrent accesses - The maximum number of messages that are served at a given time. - Throttle assertion - Assertion for a concurrency-based policy. Log in to the API Manager's management console () and go to the Resource > Browse menu to view the registry. Click the /_system/governance/apimgt/applicationdata path to go to its detailed view. In the detail view, click Add Resource. Upload the policy file to the server as a registry resource. Open the synapse configuration file of the API you want to engage the policy to, from the <API-M_HOME>/repository/deployment/server/synapse-configs/default/api directory. To engage the policy to a selected API, add it to your API definition. In this example, we add it to the login API under APIThrottleHandler.
https://apim.docs.wso2.com/en/3.0.0/learn/rate-limiting/engaging-a-new-throttling-policy-at-runtime/
2020-07-02T19:15:17
CC-MAIN-2020-29
1593655879738.16
[]
apim.docs.wso2.com
Accessing API Manager by Multiple Devices Simultaneously¶ When there are many users on a production deployment setup, accessing API Manager from multiple devices becomes important. According to the architecture, if we log out from one device and revoke the access token, then all the calls made with that token thereafter will get authentication failures. In this case applications should be smart enough to detect that authentication failure and request a new access token. Note This is a guide for you if you create client applications with API Manager underlying them. Note that you need to use the Password Grant type in this scenario. Issue in having multiple access tokens Once a user logs in to the application, the user may need to provide a username and password. We can use that information (username and password) and the consumer key and consumer secret pair to receive a new token once an authentication failure is detected. We should handle this from the client application side. If we allowed users to have multiple tokens at the same time, that would cause security related issues, and users would end up with thousands of tokens that they cannot even maintain. It also affects metering and statistics. Recommended Solution¶ The recommended solution for this issue is having only one active user token at a given time. We need to make the client application aware of error responses sent from the API Manager Gateway, and use the refresh token approach. When you request a user token you will get a refresh token along with the token response, so you can use that for refreshing the access token. How this should work¶ Let's assume that the same user is logged in to WSO2 API Manager from a desktop and a tablet. The client should provide the username and password when logging in to both the desktop and tablet apps. Then you can generate a token request with the username-password and consumer key-consumer secret pairs. This request is kept in memory until the user closes or logs out from the application (we do not persist this data anywhere, so there is no security issue). Then, when the user logs out from the desktop, the desktop application revokes the OAuth token; normally the user would be prompted to enter their username and password on the tablet, since the tablet's OAuth token has been revoked or inactivated. But here, you should not prompt for the username and password, because the client already provided them and you have the token request in memory. Once we detect the authentication failure from the tablet, it immediately sends a token generation request and gets a new token. Hence, the user will not be aware of what happens underneath.
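A small, hedged Python sketch of the client-side pattern described above: call the API, detect the 401 authentication failure, silently obtain a fresh token with the credentials kept in memory, and retry. Endpoint URLs and credentials are placeholders, and a real application might prefer the refresh_token grant where a valid refresh token is still available.

import requests

class ApiClient:
    """Keeps the token request material in memory and retries once on auth failure."""

    def __init__(self, token_url, consumer_auth, username, password):
        self.token_url = token_url          # placeholder, e.g. the gateway's token endpoint
        self.consumer_auth = consumer_auth  # (consumer_key, consumer_secret)
        self.credentials = {"username": username, "password": password}
        self.access_token = self._fetch_token()

    def _fetch_token(self):
        payload = {"grant_type": "password", **self.credentials}
        resp = requests.post(self.token_url, data=payload, auth=self.consumer_auth, verify=False)
        resp.raise_for_status()
        return resp.json()["access_token"]

    def get(self, url):
        headers = {"Authorization": f"Bearer {self.access_token}"}
        resp = requests.get(url, headers=headers, verify=False)
        if resp.status_code == 401:
            # Token was revoked (e.g. the user logged out on another device): renew silently.
            self.access_token = self._fetch_token()
            headers = {"Authorization": f"Bearer {self.access_token}"}
            resp = requests.get(url, headers=headers, verify=False)
        return resp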
https://apim.docs.wso2.com/en/3.0.0/reference/guides/accessing-api-manager-by-multiple-devices-simultaneously/
2020-07-02T19:02:45
CC-MAIN-2020-29
1593655879738.16
[]
apim.docs.wso2.com
The GAVO VOTable Library¶ A library to process VOTables using python¶ Introduction¶ This library lets you parse and generate VOTables. These are data containers as used in the Virtual Observatory and specified in the IVOA VOTable Recommendation. This library supports Version 1.3 VOTables (including BINARY2) and to some extent earlier versions as well. There are many other libraries to parse VOTables into python programs, the default probably being astropy's. This one is geared for maximal control, streaming, and the possibility to write fairly mad instance documents. There's a simplified high-level interface for simple applications, too. Obtaining the library¶ Current versions of the library are available from the DaCHS distribution page. Users of Debian stable (and similar distributions not too far removed) will want to pull the library from the data center's apt archive; see the DaCHS install docs. This library is part of the GAVO Data Center Helper Suite. Simple parsing¶ To parse a simple (one-table), moderately sized VOTable, do: votable.load(source) -> data, metadata Here, data is a list of records, each of which is a sequence, metadata is a TableMetadata instance, and source can be a string naming a local file, or it can be a file-like object. To parse material in a string, use votable.loads. votable.load only looks at the first TABLE element encountered. If the VOTable does not contain any tabular data at all (e.g., error messages from various VO protocols), (None, None) is returned. Metadata contains the VOTable TABLE element the data comes from in its votTable attribute. By iterating over the metadata you get the field objects. For example: from gavo import votable labels = [field.name for field in metadata] print [dict(zip(labels, row)) for row in data] There's a convenience method that does the dictification for you; to iterate over all rows of a VOTable as dicts, you can say: data, metadata = votable.load(source) for row in metadata.iterDicts(data): ... If you want to create a numpy record array from that data, you can say: from numpy import rec data, metadata = votable.load(source) ra = rec.array(data, dtype=votable.makeDtype(metadata)) However, you cannot in general store NULL values in record arrays (as None, that is), so this code will fail for many tables unless one introduces proper null values (e.g., nan for floats; for ints, you could ask metadata for the null value used by the VOTable). Also, record arrays cannot store variable-length strings, so makeDtype assumes some default length. Pass a defaultStringLength keyword argument if your strings get truncated (or replace the * in the FIELD object with whatever actual length you deem sufficient). All the above examples will fail on VOTables with features they cannot understand. Current VO practices recommend ignoring unknown tags and attributes, though. The VOTable library has a brute force approach so far; pass raiseOnInvalid=False to the parsing functions and they will ignore constructs they do not understand. Note that this will lead to surprising behaviour in cases where you input non-VOTables to your program. As long as it is well-formed XML, you will not receive error messages. Simple writing¶ You can do some manipulations on data and metadata as returned by votable.load (in lockstep) and write back the result using votable.save(data, metadata, destF), where destF is a writable file object. 
This could look like this: >>> data, metadata = votable.load("in.vot") >>> tableDef = metadata.votTable.deepcopy() >>> tableDef[V.FIELD(name="sum", datatype="double")] >>> newData = [row+(row[1]+row[2],) for row in data] >>> with open("res.vot", "w") as f: >>> votable.save(newData, tableDef, f) Manipulating the table definitions is not convenient in this library so far. If you need such functionality, we would probably provide functions to manipulate fields and params (or maybe just expose the child lists). Iterative parsing¶ Iterative parsing is well suited for streaming applications and is attractive because the full document need never be completely in RAM at any time. For many applications, it is a little more clunky to work with, though. The central function for iterative parsing is parse: parse(inFile, watchset=[]) -> iterator There also is parseString that takes a string literal instead of a file. inFile may be anything acceptable to the ElementTree library, so files and file names are ok. The watchset argument gives additional element types you want the iterator to return. These have to be classes from votable.V (see VOTable elements). By default, only special votable.Rows objects are returned. More on those below. You get back stanxml elements with an additional attribute idmap, a dictionary-like object mapping ids to the elements that have so far been seen. This is the same object for all items returned, so forward references will eventually be resolved in this idmap if the input document is valid. For documents with clashing ids, the behaviour is undefined. So, if you were interested in the resource structure of a VOTable, you could write: >>> from gavo.votable import V >>> >>> for element in votable.parse(open("in.vot"), watchset=[V.RESOURCE]): >>> if isinstance(element, V.RESOURCE): >>> print "Starting new resource: %s"%element.name >>> else: >>> print "Discarding %s elements"%len(list(element)) Rows objects¶ Unless you order something else, parse will yield Rows objects only, one per TABLE element. Iterate over those to obtain the table rows, deserialized, with None as NULL. Note that you must exhaust the Rows iterator if you wish to continue parsing. You currently cannot simply skip a table. The Rows' tableDefinition attribute contains the VOTable TABLE element that describes the current data. To read the first table, do: rows = votable.parse(open(inFileName)).next() print rows.tableDefinition.iterChildrenOfType(votable.V.FIELD) for row in rows: print row The tableDefinition also lets you retrieve FIELDs by name using the getFieldForName(name) -> FIELD method. stanxml elements¶ VOTables are built as stanxml elements; these have the common VOTable attributes as python attributes, i.e., to find the ucd of a FIELD f you would say f.ucd. To find out what's where, read the VOTable spec or check the VOTable elements and look out for class attributes starting with _a_ – these are turned into attributes without the prefix. To access child elements, use any of the following methods: iterChildren() – yields all the element's children in sequence iterChildrenOfType(stanxmlType) – yields all the element's children that are an instance of stanxmlType (in a python inheritance sense) makeChildDict() – returns a dictionary that maps child element names to sequences of children of this type. 
So, to find a FIELD's description(s), you could say either: list(f.iterChildrenOfType(V.DESCRIPTION)) or: f.makeChildDict()["DESCRIPTION"] The text content of a stanxml node is available in the text_ attribute. Post-data INFOs¶ More recent VOTable specifications allow INFO elements after table data. If you must catch these, things get a bit messier. To be sure tableDefinition is complete including post-data groups, you need to let the iterator run once more after exhausting the data. Here's how to do this for the first table within a VOTable: td = None for rows in votable.parse(inFile): if td is not None: break td = rows.tableDefinition for row in rows: doMagic(row) When you don't care about possible INFO elements anyway, use the simpler pattern above. Generating VOTables¶ Low Level¶ When creating a VOTable using the low-level interface, you write the VOTable using DOM elements, using a simple notation gleaned from Nevow Stan. This looks like this: vot = V.VOTABLE[ V.INFO(name="generic", value="example")["This is an example"], V.RESOURCE[ votable.DelayedTable( V.TABLE[ V.FIELD(name="col1", datatype="char", arraysize="*"),], rows, V.BINARY)]] – square brackets thus denote element membership, round parentheses are used to set attributes. The votable.DelayedTable class wraps a defined table and serializes data (rows in the example) using the structure defined by the first argument into a serialization defined by its last argument. Currently, you can use V.BINARY or V.TABLEDATA here. The data itself must come in sequences of python values compatible with your field definitions. To write the actual VOTable, use the votable.write(root, outputFile) method. You can also call the root element's render() method to obtain the representation in a string. High Level¶ There is a higher-level API to this library as a part of the DaCHS software. However, absent the mechanisms present there, it's not trivial to come up with an interface that is both sufficiently expressive and simpler than just writing down stuff as in the low level API. Numpy arrays we'll do at some point, and it helps if you ask us. Special behaviour¶ (a.k.a. "Bugs and Features") From the standards document it is not clear if, on parsing, nullvalue comparison should happen on literals or on parsed values. In this library, we went for literal comparison. This means that, e.g., for unsignedBytes with a null value of 0x10, a decimal 16 will not be rendered as None. Values of VOTable type bits are always returned as integers, possibly very long ones. Arraysize specifications are ignored when parsing VOTables in TABLEDATA encoding. The resulting lists will have the length given by the input. When writing, arraysizes are mostly enforced by clipping or padding with null values. They currently are not for strings and bit arrays. One consequence of this is that with arraysize="*", a NULL array will be an empty tag in TABLEDATA, but with arraysize='n' it will be n nullvalues. Bit arrays in TABLEDATA encoding may have interspersed whitespace or not. When encoding, no whitespace is generated since this seems the intention of the spec. All VOTables generated by this library are in UTF-8. unicodeChar in BINARY encodes to and from UTF-16 rather than UCS-2 since UCS-2 is deprecated (and actually unsupported by python's codecs). However, this will fail for fixed-size strings containing characters outside of the BMP since it is impossible to know how many bytes an unknown string will occupy in UTF-16. 
So, characters for which UCS-2 and UTF-16 are different will fail. These probably are rare, but we should figure out some way to handle this. This will only make a difference for characters outside of the Basic Multilingual Plane. Hope you'll never encounter any. Nullvalue declarations for booleans are always ignored. Nullvalue declarations for floats, doubles, and their complex counterparts are ignored when writing (i.e., we will always use NaN as a nullvalue; anything else would be highly doubtful anyway since floats coming from binary and decimal representations are tricky to compare at best). When serializing bit fields in BINARY and there are too many bits for the number of bytes available, the most significant bits are cut off. If there are too few, zeroes are added on the left. Post-data INFOs are not currently accessible when doing iterative parsing. In BINARY serialization, fixed-length strings (both char and unicodeChar) are always padded right with blanks, whether or not a nullvalue is defined. For char and unicodeChar arrays, nullvalues are supposed to refer to the entire array value. This is done since probably no library will support individual NULL characters (whatever that is) within strings, and if we encounter such a thing, this probably is the meaning. Don't write something like that, though. When deserializing variable multidimensional arrays from BINARY encoded streams, the length is assumed to be the total number of elements in the array rather than the number of rows. This may change when I find some VOTable using this in the wild. Multidimensional arrays are returned as a single sequence on parsing, i.e. an arraysize of 5x7 is interpreted exactly like 35. This is not going to change. If you must, you can use the unravelArray(arraysize, seq) function to reshape the list and get a nested structure of lists, where arraysize has the form of the VOTable FIELD attribute. If seq does not match the dimensions described by arraysize, the behavior is undefined (right now, we return short rows, but we may later raise exceptions). On writing, you must flatten your multidimensional arrays before passing them to the library. This may change if people actually use it. The behavior then will be to accept as input whatever unravelArray returns. You can guess that the author considers multidimensional arrays a particularly gross misfeature within the misfeature VOTable arrays. The ref attribute on TABLEs currently is not interpreted. Due to the way the library works, this means that such tables cannot be parsed. The STC Data Model¶ To include STC information, you can just build the necessary GROUPs, PARAMs and FIELDrefs yourself. Alternatively, you can install GAVO's STC library and build ASTs in some way (most likely from STC-S) and use the modelgroups module to include the information. This would look like this: from votable import modelgroups, DelayedTable [...] ast = ast.parseQSTCS('Time TT "date" Position ICRS "ra" "de"') fields = [ V.FIELD(name="date", datatype="char", arraysize="*"), V.FIELD(name="ra", datatype="float"), V.FIELD(name="de", datatype="float"),] table = V.TABLE[fields, modelgroups.marshal(ast, getIDFor)] XXX TODO: Add id management.
https://dachs-doc.readthedocs.io/votable.html
2020-07-02T17:53:20
CC-MAIN-2020-29
1593655879738.16
[]
dachs-doc.readthedocs.io
Creating and managing vendors Track-It! allows you to manage vendor information that you can use for asset management. You can manage vendor information such as company, payment terms, and contact details. Creating a vendor - On the header bar, expand the hamburger menu and select Configuration. - Click Application Settings and select Vendors. - On the Vendors page, click New. In the New Vendor dialog box, perform the following steps: In the Company Name field, enter the company name for the new vendor. - Enter an appropriate value for the other fields. (Optional) In the Comments field, enter additional details about the vendor. - (Optional) To make this vendor unavailable, select the Mark as Inactive check box. - Click Save. - To add more vendors, repeat steps 4 and 5. - (Optional) To edit a vendor, click the edit icon. - (Optional) To delete a vendor, click the delete icon. Related topics Setting up BMC Client Management Configuring asset management
https://docs.bmc.com/docs/trackit2018/en/creating-and-managing-vendors-777225675.html
2020-07-02T19:20:17
CC-MAIN-2020-29
1593655879738.16
[]
docs.bmc.com
Energy in the UK 2019
https://www.docs.energy-uk.org.uk/energy-industry/6872-energy-in-the-uk-2.html
2020-07-02T18:36:30
CC-MAIN-2020-29
1593655879738.16
[array(['/images/energy_industry_img/2017/rsz_eituk_2017.png', None], dtype=object) ]
www.docs.energy-uk.org.uk
Custom Reduce Functions The reduce() function has to work slightly differently to the map() function. In the primary form, a reduce() function must convert the data supplied to it from the corresponding map() function. The core structure of the reduce function execution is shown in the figure below. The base format of the reduce() function is as follows: function(key, values, rereduce) { … return retval; } The reduce function is supplied three arguments: key The key is the unique key derived from the map() function and the group_level parameter. values The values argument is an array of all of the values that match a particular key. For example, if the same key is output three times, values will be an array of three items, with each item containing the value output by the emit() function. rereduce The rereduce argument indicates whether the function is being called as part of a re-reduce, that is, the reduce function being called again to further reduce the input data. When rereduce is false: * The supplied `key` argument will be an array where the first element is the `key` as emitted by the map function, and the `id` is the document ID that generated the key. * The `values` argument is an array of values where each element of the array matches the corresponding element within the array of `keys`. When rereduce is true: * `key` will be null. * `values` will be an array of values as returned by a previous `reduce()` function. The function should return the reduced version of the information via its return value. The format of the return value should match the format required for the specified key.
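As an illustrative sketch (not taken from this page), a count-style custom reduce that handles both phases could look like the following; on the first pass it counts the values emitted for a key, and on the re-reduce pass it sums the partial counts returned by earlier reduce() calls:

function (key, values, rereduce) {
  if (rereduce) {
    // values are partial counts returned by previous reduce() calls: add them up
    var total = 0;
    for (var i = 0; i < values.length; i++) {
      total = total + values[i];
    }
    return total;
  } else {
    // values are the raw values emitted for this key: count them
    return values.length;
  }
}

Because the return value (a number) has the same shape in both branches, the output of a first-pass reduce can safely be fed back in as input during a re-reduce.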
https://docs.couchbase.com/server/5.0/views/views-writing-custom-reduce.html
2020-07-02T19:29:21
CC-MAIN-2020-29
1593655879738.16
[array(['_images/custom-reduce.png', 'custom reduce'], dtype=object)]
docs.couchbase.com
MSN APIs - more details MSN's Dare Obasanjo has collated more info on the MSN APIs (to be officially announced next week). On the subject of MSN APIs and Virtual Earth hacks, the father of Mr Scoble Jnr has posted a second new Channel 9 video interview with the MSN Virtual Earth team. Also worth checking out is the Virtual Earth team blog.
https://docs.microsoft.com/en-us/archive/blogs/alexbarn/msn-apis-more-details
2020-07-02T20:26:25
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Moving Alerts to the Historic Database The Alert Analyzer calculates information entropy values using event data for alerts that have been closed and moved into the historic database. It needs about two weeks of historic data to analyze patterns in your event descriptions and calculate an entropy model for applying to live data. Your lab instance contains two weeks' worth of open alerts and their underlying events. Simulate the passage of time by closing them and moving them to the historic database so that they are available for calculating entropy. Close all of the open alerts and Situations in the user interface (UI). Run the MoogDB split configurer utility to move the closed alerts to the historic database. You can do this by accessing your instance with a terminal program and the ssh credentials you received. Alternatively, you can use a pre-configured ChatOps command, 'split_db'. Verify that the utility ran successfully by checking in the UI that no closed alerts remain. Navigate to Workbench>Open Alerts. Scroll to the last alert. You will use this data for entropy calculations. Because it is based on past data--about two weeks' worth--most of the alerts already have 'clear' status. Scan some of the descriptions. Which alerts seem most serious? Notice that the alerts come from several different managers, or monitoring systems. Once you have loaded all the alerts into the UI by scrolling to the bottom of the list, you should be able to select all of them at once by using the checkbox at the top left. Click the checkbox, right-click, select 'Close', and click 'OK' to close all of the alerts. Close all of the open Situations as well. There are no alerts visible in the default Open Alerts view, but the closed alerts are all still in the active database, so they are not yet available in the historic database for calculating entropy. When it is configured to do so, the Housekeeper Moolet will move closed alerts into the historic database on a regular schedule. This time though, move the data yourself using the moog_db_split_configurer utility. You could access this utility by logging in to your instance using your ssh credentials, but we have configured a ChatOps shortcut for this lab so you do not have to do so. Go to Workbench>Open Situations>Tools>Create Situation and click 'Done' to create a Situation manually. Go to the Collaborate tab for that Situation. Enter '@bot split_db' in the comment box. This will run the following command: split_time=$(date -d "+1 minute" +"%H:%M") && /usr/share/moogsoft/bin/utils/moog_db_split_configurer -g 0 -r $split_time The first part of the command, before the '&&', defines a time one minute ahead of the current time on your lab instance, and gives it an 'HH:MM' format. The second part of the command schedules the database split utility to run at that time. Wait until the utility has had time to run, and then go to Workbench>Open Alerts. Click on 'Status' in the filter bar and choose 'Closed' and 'Apply'. Verify that there are no closed alerts remaining in the active database. This concludes the lab section.
https://docs.moogsoft.com/Enterprise.8.0.0/moving-alerts-to-the-historic-database.html
2020-07-02T18:48:16
CC-MAIN-2020-29
1593655879738.16
[]
docs.moogsoft.com
Manila is the file share service project for OpenStack. Manila provides the management of file shares (for example, NFS and CIFS) as a core service to OpenStack. Manila works with a variety of proprietary backend storage arrays and appliances, with open source distributed filesystems, as well as with a base Linux NFS or Samba server. There are a number of concepts that will help in better understanding the solutions provided by manila. One aspect is to explore the different service possibilities provided by manila. Manila, depending on the driver, by default requires the user to create a share network using a neutron-net-id and neutron-subnet-id (the GlusterFS native driver does not require it). After creating the share network, the user can proceed to create shares. Users in manila can configure multiple back-ends, just like Cinder. Manila has a share server assigned to every tenant. This is the solution for all back-ends except for GlusterFS. The customer in this scenario is prompted to create a share server using a neutron net-id and subnet-id before even trying to create a share. The current low-level services available in manila are:
https://docs.openstack.org/manila/stein/contributor/intro.html
2020-07-02T20:12:05
CC-MAIN-2020-29
1593655879738.16
[]
docs.openstack.org
[ English | 日本語 | Deutsch | Indonesia ] Advanced Configuration¶ OpenStack is intended to work well across a variety of installation flavors, from very small private clouds to large public clouds. To achieve this, the developers add configuration options to their code that allow the behavior of the various components to be tweaked depending on your needs. Unfortunately, it is not possible to cover all possible deployments with the default configuration values. At the time of writing, OpenStack has more than 3,000 configuration options. You can see them documented at the OpenStack Configuration Reference. This chapter cannot hope to document all of these, but we do try to introduce the important concepts so that you know where to go digging for more information. Differences Between Various Drivers¶ Many OpenStack projects implement a driver layer, and each of these drivers will implement its own configuration options. For example, in OpenStack Compute (nova), there are various hypervisor drivers implemented—libvirt, xenserver, hyper-v, and vmware, for example. Not all of these hypervisor drivers have the same features, and each has different tuning requirements. Note The currently implemented hypervisors are listed on the OpenStack Configuration Reference. You can see a matrix of the various features in OpenStack Compute (nova) hypervisor drivers at the Hypervisor support matrix page. The point we are trying to make here is that just because an option exists doesn’t mean that option is relevant to your driver choices. Normally, the documentation notes which drivers the configuration applies to. Implementing Periodic Tasks¶ Another common concept across various OpenStack projects is that of periodic tasks. Periodic tasks are much like cron jobs on traditional Unix systems, but they are run inside an OpenStack process. For example, when OpenStack Compute (nova) needs to work out what images it can remove from its local cache, it runs a periodic task to do this. Periodic tasks are important to understand because of limitations in the threading model that OpenStack uses. OpenStack uses cooperative threading in Python, which means that if something long and complicated is running, it will block other tasks inside that process from running unless it voluntarily yields execution to another cooperative thread. A tangible example of this is the nova-compute process. In order to manage the image cache with libvirt, nova-compute has a periodic process that scans the contents of the image cache. Part of this scan is calculating a checksum for each of the images and making sure that checksum matches what nova-compute expects it to be. However, images can be very large, and these checksums can take a long time to generate. At one point, before it was reported as a bug and fixed, nova-compute would block on this task and stop responding to RPC requests. This was visible to users as failure of operations such as spawning or deleting instances. The take away from this is if you observe an OpenStack process that appears to “stop” for a while and then continue to process normally, you should check that periodic tasks aren’t the problem. One way to do this is to disable the periodic tasks by setting their interval to zero. Additionally, you can configure how often these periodic tasks run—in some cases, it might make sense to run them at a different frequency from the default. The frequency is defined separately for each periodic task. 
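In nova.conf these intervals are plain key = value entries, so tuning or disabling one is a single line. For example, a fragment might look like this (a sketch only; the option names are taken from the list that follows, the [DEFAULT] section is assumed, and option availability varies by release):

[DEFAULT]
# Setting an interval to zero disables that periodic task, as described above
image_cache_manager_interval = 0
reclaim_instance_interval = 0
shelved_poll_interval = 0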
Therefore, to disable every periodic task in OpenStack Compute (nova), you would need to set a number of configuration options to zero. The current list of configuration options you would need to set to zero are: bandwidth_poll_interval sync_power_state_interval heal_instance_info_cache_interval host_state_interval image_cache_manager_interval reclaim_instance_interval volume_usage_poll_interval shelved_poll_interval shelved_offload_time instance_delete_interval To set a configuration option to zero, include a line such as image_cache_manager_interval=0 in your nova.conf file. This list will change between releases, so please refer to your configuration guide for up-to-date information. Specific Configuration Topics¶ This section covers specific examples of configuration options you might consider tuning. It is by no means an exhaustive list. Security Configuration for Compute, Networking, and Storage¶ The OpenStack Security Guide provides a deep dive into securing an OpenStack cloud, including SSL/TLS, key management, PKI and certificate management, data transport and privacy concerns, and compliance. High Availability¶ The OpenStack High Availability Guide offers suggestions for elimination of a single point of failure that could cause system downtime. While it is not a completely prescriptive document, it offers methods and techniques for avoiding downtime and data loss. Enabling IPv6 Support¶ You can follow the progress being made on IPV6 support by watching the neutron IPv6 Subteam at work. By modifying your configuration setup, you can set up IPv6 when using nova-network for networking, and a tested setup is documented for FlatDHCP and a multi-host configuration. The key is to make nova-network think a radvd command ran successfully. The entire configuration is detailed in a Cybera blog post, “An IPv6 enabled cloud”. Geographical Considerations for Object Storage¶ Support for global clustering of object storage servers is available for all supported releases. You would implement these global clusters to ensure replication across geographic areas in case of a natural disaster and also to ensure that users can write or access their objects more quickly based on the closest data center. You configure a default region with one zone for each cluster, but be sure your network (WAN) can handle the additional request and response load between zones as you add more zones and build a ring that handles more zones. Refer to Geographically Distributed Swift Considerations in the documentation for additional information.
https://docs.openstack.org/operations-guide/ops-advanced-configuration.html
2020-07-02T19:23:32
CC-MAIN-2020-29
1593655879738.16
[]
docs.openstack.org
krb5.pac.validation This configuration parameter specifies whether or not to verify that the user's PAC (Privilege Attribute Certificate) information is from a trusted KDC (Key Distribution Center), so as to prevent what is referred to as a "silver ticket" attack. When performing credential verification, a service ticket is fetched for the local system. After the credential is verified, the local system uses the PAC information in the service ticket. This setting takes effect when krb5.verify.credentials is enabled or when DirectControl is using the user's PAC from a service ticket. This setting does not apply to retrieving the PAC by way of the S4U2Self protocol. There are 3 possible values for krb5.pac.validation: - disabled (default): No PAC validation is done at all. - enabled: If PAC validation fails, the PAC information is still used and the user login is allowed. - enforced: If PAC validation fails, the PAC information is discarded and the user login is denied. Setting this parameter to enabled or enforced has a significant impact on user login and group fetch performance. For example: krb5.pac.validation: disabled If this parameter is not defined in the configuration file, its default value is disabled.
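As an illustrative sketch (the configuration file path is an assumption; adjust it to your DirectControl installation), enforcing PAC validation is a single line in the agent configuration file:

# Sketch: /etc/centrifydc/centrifydc.conf (path assumed)
# Takes effect only when krb5.verify.credentials is enabled, as described above
krb5.pac.validation: enforced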
https://docs.centrify.com/Content/config-unix/krb5_pac_validation.htm
2021-06-12T14:28:36
CC-MAIN-2021-25
1623487584018.1
[]
docs.centrify.com
Citrix Virtual Apps Essentials Citrix Virtual Apps Essentials allows you to deliver Windows applications and shared hosted desktops from Microsoft Azure to any user on any device. The service combines the industry-leading Citrix Virtual Apps service with the power and flexibility of Microsoft Azure. You can also use Virtual Apps Essentials to publish Windows Server desktops. Server OS machines run multiple sessions from a single machine to deliver multiple applications and desktops to multiple, simultaneously connected users. Each user requires a single session from which they can run all their hosted applications. The service is delivered through Citrix Cloud and helps you to deploy your application workloads within your Azure subscription with ease. When users open applications from the workspace experience, the application appears to run locally on the user computer. Users can access their apps securely from any device, anywhere. Virtual Apps Essentials includes the workspace experience and the Citrix Gateway service, in addition to its core management services. Your app workloads run in your Azure subscription. Deployment architectureDeployment architecture The following diagram shows an architectural overview of a basic Virtual Apps Essentials cloud deployment: You can also allow users to connect to your on-premises data center. Connections between the Azure cloud and your on-premises data center occur through a VPN connection. Users connect through Virtual Apps Essentials to your license server, file servers, or Active Directory over the VPN connection. Deployment summaryDeployment summary Follow these steps to deploy Citrix Virtual Apps Essentials: - Buy Citrix Virtual Apps Essentials from the Azure Marketplace. - Prepare and link your Azure subscription. - Create and upload your master image. - Deploy a catalog, publish apps and desktops, and assign subscribers Apps Essentials: XenApp is part of our workspace strategy, where many types of apps come together in the preferred place to access work tools. As part of a unified, contextual, secure workspace, XenApp Essentials is now Citrix Virtual Apps Essentials. Citrix Workspace app: The Citrix Workspace app incorporates existing Citrix Receiver technology as well as the other Citrix Workspace client technologies. It has been enhanced to deliver more capabilities to provide end users with a unified, contextual experience where they can interact with all the work apps, files, and devices they need to do their best work. Citrix Gateway: The NetScaler Unified Gateway, which allows secure, contextual access to the apps and data you need to do your best work, is now Citrix Gateway., other resources (such as videos and blog posts), and other sites (such as Azure Marketplace) might still contain former names. Your patience during this transition is appreciated. For more detail about our new names, see. May 2018: Building additional images from the Virtual Apps Essentials interface After creating a production image from the Azure Resource Manager interface, you can create additional images through Azure, as needed. Now, as an optional alternative to creating additional images through the Azure interface, you can build a new master image from the Virtual Apps Essentials interface. For details, see Prepare and upload a master image. May 2018: Monitor display enhancements The Monitor display now includes usage information about applications and top users. For details, see Monitor the service. 
System requirementsSystem requirements Microsoft Azure Citrix Virtual Apps Essentials supports configuring machines only through Azure Resource Manager. Use Azure Resource Manager to: - Deploy resources such as virtual machines (VMs), storage accounts, and a virtual network. - Create and manage the resource group (a container for resources that you want to manage as a group). To provision and deploy resources in Microsoft Azure, you need: - An Azure account. - An Azure Resource Manager subscription. - An Azure Active Directory global administrator account in the directory associated with your subscription. The user account must have Owner permission for the Azure subscription to use for provisioning resources. For more information about how to set up an Azure Active Directory tenant, see How to get an Azure Active Directory tenant. Citrix Cloud Virtual Apps Essentials is delivered through the Citrix Cloud and requires a Citrix Cloud account to complete the onboarding process. You can create a Citrix Cloud account on the Citrix Cloud Sign Up page before going to Azure Marketplace to complete the transaction. The Citrix Cloud account you use cannot be affiliated with an existing Citrix Virtual Apps and Desktops service or Citrix Virtual Desktops Essentials service account. Virtual Apps Essentials console You can open the Virtual Apps Essentials administration console in the following web browsers: - Google Chrome - Internet Explorer Known issuesKnown issues Virtual Apps Essentials has the following known issues: - On Windows Server 2019 VDAs, some application icons might not appear correctly during configuration and in the users’ workspace. As a workaround, after the app is published, use the Change icon feature to assign a different icon that displays correctly. - If you use Azure AD Domain Services: Workspace logon UPNs must contain the domain name that was specified when enabling Azure AD Domain Services. Logons cannot use UPNs for a custom domain you create, even if that custom domain is designated as primary. - When you configure users for a catalog and select a domain, you can see and choose the users from the Builtin\users group. - Creating the catalog fails if the virtual machine size is not available for the selected region. To check the virtual machines that are available in your area, see the chart at Products available by region on the Microsoft website. - You cannot create and publish multiple instances of the same app from the Start menu at the same time. For example, from the Start menu you publish Internet Explorer. Then, you want to publish a second instance of Internet Explorer that opens a specific website on startup. To do so, publish the second app by using the path for the app instead of the Start menu. - Virtual Apps Essentials supports linking a subscription by using an Azure Active Directory user account. Virtual Apps Essentials does not support Live.com authenticated accounts. - Users cannot start an application if there is an existing Remote Desktop Protocol (RDP) session on the VDA. This behavior only happens if the RDP session starts when no other users are logged on to the VDA. - You cannot enter a license server address longer than server.domain.subdomain. - If you perform multiple sequential updates to capacity management, there is a possibility that the updated settings do not properly propagate to the VDAs. - If you use a non-English web browser, the text appears as a combination of English and the browser language. 
How to buy the serviceHow to buy the service Note: The information in this section is also available as a PDF. That content contains earlier product names. Buy Citrix Virtual Apps Essentials directly from the Azure Marketplace, using your Microsoft Azure account. Citrix Virtual Apps Essentials requires at least 25 users. The service is delivered through Citrix Cloud and requires a Citrix Cloud account to complete the onboarding process. See System requirements > Citrix Cloud for details. When buying Citrix Virtual Apps Essentials, ensure that you enter correct information for all details, including address fields, to ensure fast processing of your order. Before you configure Virtual Apps Essentials, ensure that you complete the following in the Azure Marketplace: - Provide contact information and your company details. - Provide your billing information. - Create your subscription. To configure the customer and pricing: - In Select a customer, select the customer name. - Under Pricing, in Number of users, type the number of users who have access to Virtual Apps Essentials. - Under Price per month, select the agreement check box and then click Create. The summary page appears and shows the details of the resource. After your account is provisioned, click Manage through Citrix Cloud. Important: Wait for Microsoft Azure to provision your service. Do not click the Manage through Citrix Cloud link until provisioning is complete. This process can take up to four hours. When you click the link, Citrix Cloud opens in the web browser, and you can begin the configuration process described below. Prepare your Azure subscriptionPrepare your Azure subscription Choose your Azure subscription to be the host connection for your VDAs and related resources. These resources can incur charges based on your consumption. Note: This service requires you to log on with an Azure Active Directory account. Virtual Apps Essentials does not support other account types, such as live.com. To prepare your Azure subscription, configure the following in Azure Resource Manager: Create a resource group and provide: - Resource group name - Subscription name - Location - In Azure Resource Manager, create a virtual network in the resource group and provide a name for the network. You can leave all other default settings. You create a storage account when you create the master image. - Use an existing domain controller or create one. If you create a domain controller: - Use the A3 Standard or any other size Windows Server 2012 R2 virtual machine in the Resource Group and virtual network. This virtual machine becomes the domain controller. If you plan to create multiple domain controllers, create an availability set and put all the domain controllers in this set. - Assign a private static IP address to the network adapter of the virtual machine. You can assign the address in the Azure portal. For more information, see Configure private IP addresses for a virtual machine using the Azure portal on the Microsoft documentation website. - [Optional] Attach a new data disk to the virtual machine to store the Active Directory users and Groups and any Active Directory logs. For more information, see Attach a managed data disk to a Windows VM by using the Azure portal. When you attach the disk, select all the default options to complete the settings. - Add the domain controller virtual machine’s private IP address to the virtual network DNS server. 
For more information, see Manage DNS servers used by a virtual network (Classic) using the Azure portal (Classic). - Add a public DNS server in addition to the Microsoft DNS server. Use the IP address 168.63.129.16 for the second DNS server. - Add the Active Directory Domain Services role to the domain controller virtual machine. When this step is complete, promote the domain controller virtual machine to a domain controller and DNS. - Create a forest and add some Active Directory users. For more information, see Install a new Active Directory forest on an Azure virtual network. If you prefer to use Azure Active Directory Domain Services instead of a domain controller, Citrix recommends reviewing the documentation Azure Active Directory Domain Services for Beginners on the Microsoft website. Link Your Azure subscription In Citrix Cloud, link your Citrix Virtual Apps Essentials to your Azure subscription. - Sign in to Citrix Cloud. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Azure Subscriptions. - Click Add Subscription. The Azure portal opens. - Log on to your Azure subscription with your global administrator Azure credentials. - Click Accept to allow Virtual Apps Essentials to access your Azure account. The subscriptions available in your account are listed. - Select the subscription you want to use and then click Link. - Return to the Virtual Apps Essentials console to see the subscription in a linked state. After you link your Azure subscription to Virtual Apps Essentials, upload your master image. Prepare and upload a master imagePrepare and upload a master image Catalog creation uses a master image to deploy VMs containing applications and desktops. This can be a master image you prepare (with applications and VDA installed), or an image prepared by Citrix. For production deployments, Citrix recommends preparing and using your own master image. Citrix-prepared images are intended only for pilot or test deployments. The first production image must be prepared from the Azure Resource Manager interface. Later, you can create additional images through Azure, as needed. As an alternative to creating additional images through the Azure interface, you can build a new master image from the Virtual Apps Essentials interface. This method uses a previously created master image. You can obtain the network settings from an existing catalog or manually specify them. After you use an existing master image to create a new image, you connect to the new image and customize it, adding or removing apps that were copied from the template. The VDA is already installed, so you don’t have to do that again. This method lets you stay with the Essentials service. You don’t need to navigate to Azure to create the new image, and then return to the Essentials service to import the image. For example, let’s say you have a catalog named HR that uses a master image containing several HR apps. Recently, a new app released that you want to make available to the HR catalog users. Using the build-an-image feature in Virtual Apps Essentials, you select the current master image as a template to create a new master image. You also select the HR catalog so that the new master image uses the same network connection settings. After the initial image setup, install the new app on the new image. After testing, update the HR catalog with the new master image, making it available to that catalog’s users. 
The original HR master image is retained in the My Images list, in case it’s ever needed again. The following sections describe how to prepare and upload a master image through the Azure interface. For details about building an image from within Virtual Apps Essentials, see Prepare a master image in Virtual Apps Essentials. Procedure summary - Prepare a master image VM in Azure or Virtual Apps Essentials. - Install apps on the master image. - Install a Citrix VDA on the master image. - Upload the master image from Azure Resource Manager to Virtual Apps Essentials (if needed). Citrix recommends installing the latest Current Release (CR) of the server VDA or the latest Cumulative Update (CU) for Server VDA 7.15 Long Term Service Release (LTSR) on Windows Server 2016 or Windows Server 2012 R2 machines. If you have a Windows Server 2008 R2 machine, you must install server VDA 7.15 LTSR (latest CU recommended), which is also available on the download page. See Lifecycle Policy for Citrix Cloud Virtual Apps and Desktops Service to learn about the lifecycle policy for CR and LTSR VDAs. Create a master image VM in Azure - Click Create a Resource in the navigation pane. Select or search for a Windows Server 2008 R2, Windows Server 2012 R2, or Windows Server 2016 entry. Click Create. On the Create virtual machine page, in panel 1 Basics: - Enter a name for the VM. - Select a VM disk type (optional). Create a standard disk. - Enter the local user name and password, and confirm the password. - Select your subscription. - Create a new resource group or select an existing resource group. - Select the location. - Select the resource group and location. - Choose whether you will use a Windows license that you already own. - Click OK. On the Create virtual machine page, in panel 2 Size, choose the virtual machine size: - Select a VM type, then indicate the minimum number of vCPUs and minimum memory. The recommended choices are displayed. You can also display all choices. - Choose a size and then click Select. On the Create virtual machine page, In panel 3 Settings: - Indicate whether you want to use high availability. - Provide the virtual network name, subnet, public IP address, and network security. - Optionally, select extensions. - Enable or disable auto-shutdown, monitoring (boot diagnostics, guest OS diagnostics, diagnostics storage account). - Enable or disable backup. - Click OK. - In panel 4 Summary, click OK to begin creation of the VM. Do not Sysprep the image. Install apps on the master image On the master image VM you just created, add the apps that will be available to users when they log on with the workspace URL. (Later, after you create the catalog that uses this master image, you’ll specify exactly which of these apps will be available to the users you specify.) - Connect to the master image VM after you create it and while it is running. - Install applications. Install a VDA on the master image - Connect to the master image VM (if you’re not already connected). - You can download a VDA for Server OS by using the Downloads link on the Citrix Cloud navigation bar. Or, use a browser to navigate to the Citrix Virtual Apps and Desktops service download page. Download a VDA for Server OS onto the VM. (See guidance above for VDA version information.) -), and then click Next. - Click Finish. The machine restarts automatically. - To ensure that the configuration is correct, launch one or more of the applications you installed. - Shut down the VM. Do not Sysprep the image. 
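If you prefer to script the master image VM creation rather than click through the portal steps above, an Azure CLI sketch might look like the following; the resource names and credentials are placeholders, the image alias is an assumption, and you should check az vm image list for the Windows Server offer you actually want:

# Sketch: create a resource group and a Windows Server VM to customize as the master image
az group create --name vapps-rg --location eastus
az vm create --resource-group vapps-rg --name master-image-vm --image Win2016Datacenter --admin-username azureadmin --admin-password '<your-password>'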
Upload the master image In this procedure, you upload the master image from Azure Resource Manager to Virtual Apps Essentials. - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops - On the Manage tab, click Master Images. - Click Add Master Image. - On the Add an image page, specify the location of the image by selecting the subscription, resource group, storage account, VHD, and region. - Enter a name for the master image. - Click Save. The service verifies the master image. After verification, the uploaded image appears under Master Images > My Images. Tip: As an alternative to uploading the master image before creating the catalog, you can import a master image from Azure Resource Manager when you create the catalog. Prepare a master image in Virtual Apps Essentials This method uses an existing master image as a template (and optionally, connection details from an existing catalog) to build another master image. You can then customize the new master image. This procedure is completed entirely through the Virtual Apps Essentials interface. - Sign in to Citrix Cloud, if you haven’t already. In the upper left menu, select My Services > Virtual Apps and Desktops. - Click Manage and then select the Master Images tab. - Click Build Image. - On the Build Image page, in the Select an image panel, select a master image. Specify a name for your new image. Click Next. In the Specify network connectivity settings panel, you can either use the settings from an existing catalog, or you can specify the settings. The settings are: subscription, virtual network, region, subnet, domain, and VM instance type. (If you don’t have a catalog, you must enter the settings.) If you select Copy settings from a catalog, select the catalog. The network connection settings display, so you can visually verify that you want to use them with your new master image. Enter your service account username and password to join the domain. Click Save. If you select Enter new settings, select values in the appropriate settings fields. Enter your service account username and password to join the domain. Click Save. - Click Start Provisioning. - When the new image has been created, it appears in the Manage > Master Images list with a status of Input Required. Click Connect to VM. An RDP client downloads. Use RDP to connect to the newly created VM. Customize the new image by adding or removing applications and other software. As with all master images, do not Sysprep the image. - When you’re done customizing your new image, return to the Manage > Master Images page and click Finish for your new master image. The new image is then sent to the verification process. - When the verification process completes, the new image appears in the My Images list with a status of Ready. Later, when you create a catalog, and select Link an existing image on the Choose master image page, the new image appears among the Image Name choices. Deploy a catalog, publish apps and desktops, and assign subscribersDeploy a catalog, publish apps and desktops, and assign subscribers A catalog lists the apps and desktops that you choose to share with selected users. If you’re familiar with other Citrix app and desktop delivery products, a catalog in this service is similar to combining a machine catalog and a delivery group. However, the machine catalog and delivery group creation workflows in other services are not available in this service. 
Deploying a catalog and sharing apps with subscribers is a multi-step process. - Create a catalog - Publish apps and assign subscribers for that catalog - Test and share the workspace link your subscribers will use Create a catalog When creating a catalog, have Azure Active Directory account credentials and your subscription name available. - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs and then Add Catalog. - Provide information in the following panels. Click Save when you’re done with each panel. A warning sign appears in a panel’s header if required information is missing or invalid. A check mark indicates that the information is complete. Pick a name - Type a 2-38 character name for the catalog. (Letters and numbers only, no special characters.) This name is visible only to administrators. - Select Domain Joined if it isn’t already selected. A domain-joined deployment allows VDAs to join Active Directory. Later, you provide an Azure virtual network that is connected to your domain. If you don’t have a domain, you can use Azure Active Directory Domain services. - Click Save. Link your Azure subscription - Select your Azure subscription. When you link a new Azure subscription, the Azure sign-in page appears for authentication of your Azure credentials. After signing in, accept the service consent to manage your subscription. Then, you can link a subscription. Virtual Apps Essentials requires you to log on with an Azure Active Directory account. Other account types (such as live.com) are not supported. - Select your resource group, virtual network (VNET), and subnet. The VNET determines the Azure region where your resources are deployed. The subnet must be able to reach your domain controller. - Click Save. Join local domain Enter domain information: - Fully Qualified Domain Name: Enter the domain name. The name must resolve from the DNS provided in the virtual network. - Organizational Unit: (optional) Ensure that Active Directory contains the specified OU. If you leave this field blank, machines are placed in the default Computers container. - Service Account Name, Password, and Confirm Password: Enter the User Principal Name (UPN) of the account that has permissions to add machines to the domain. Then enter and confirm the password for that account. Click Save. You can test connectivity through the virtual network by creating a VM in your Azure subscription. The VM must be in the same resource group, virtual network, and subnet that you use to deploy the catalog. Ensure that the VM can connect to the internet. Also ensure that you can reach the domain by joining the VM to the domain. You can test using the same credentials that were used for deploying this catalog. Connect to a resource location Each resource location must have two or more Cloud Connectors, which communicate with Citrix Cloud. The service handles the Cloud Connector deployment automatically when a catalog deploys. The two Windows Server VMs are created in Azure Resource Manager and then a Cloud Connector is installed automatically on each server. If the selected resource location is available, connection occurs automatically. Simply click Save. To create a resource location, enter a name for it. - To create Cloud Connectors in a specific Azure resource group, click Edit next to Azure Resource Group to change the resource location. 
Otherwise, the service uses the resource group you specified when you linked your Azure subscription. - To put the Cloud Connectors into a separate OU, click Edit next to Organizational Unit to change the OU. Otherwise, Virtual Apps Essentials uses the resource group you specified when you linked your Azure subscription. Choose a master image Select one of the following: - Link an existing image: Use this option if you previously imported a custom image and want to use it with this catalog. Select the image and optionally, a region. - Import a new image: Use this option if you want to use a custom image with this catalog, but have not yet imported it. Select the subscription, resource group, storage account, and VHD. Enter a friendly name for the image. - Use a Citrix prepared image: Use this option to test the service without using your own custom image. These images are suitable only for demonstration environments, and are not recommended for production. Select a prepared image. Click Save. Pick storage and compute type Configure the following items: - Standard or premium disks: Standard disks (HDD) are backed by magnetic drives. They are preferable for applications where data is accessed infrequently. Premium disks (SDD) are backed by solid state drives. They are ideal for I/O-intensive applications. - Use Azure Managed Disks or unmanaged disks: Learn more about Azure Managed Disks at. Azure Hybrid Use Benefit: Select whether or not to use existing on-premises Windows Server licenses. Enabling this feature and using existing on-premises Windows Server images uses Azure Hybrid Use Benefits (HUB). For details, see. HUB reduces the cost of running VMs in Azure to the base compute rate, because it waives the price of additional Windows Server licenses from the Azure gallery. You need to bring your on-premises Windows Servers images to Azure to use HUB. Azure gallery images are not supported. On-premises Windows Client licenses are currently not supported. See Azure Hybrid Benefit for Windows Server on the Microsoft web site. - Pick a virtual machine size: Select a worker role (for example, task, office, knowledge, power). The worker role defines the resources used. When you specify a worker role, the service determines the correct load per instance. You can select an option or create your own custom option. Click Save. Manage costs with power management settings Enter the following information: Scale settings: - Minimum number of running instances: The service ensures that this many VMs are powered on all the time. - Maximum number of running instances: The service does not exceed this number of VMs. - Maximum concurrent users: The service does not allow concurrent users beyond this limit. Capacity buffer: Enables extra sessions to be ready for demand spikes, as a percentage of current session demand. For example, if there are 100 active sessions and the capacity buffer is 10%, the service provides capacity for 110 sessions. As the total session capacity changes, the number of running instances for this catalog scales up or down. The number of running instances always stays within the configured minimum and maximum values. A lower capacity buffer percentage can result in a decreased cost. However, it might also result in some sessions having an extended logon time if several sessions start concurrently. Schedule for peak time: Select this option if you want a different number of VMs running during peak times than in non-peak times. 
Select the days of the week for the peak time, start and end times, and time zone. Specify the minimum number of running instances during peak time. Idle or disconnected session time-out: Set the time for when the session ends. User sessions end automatically if the session remains idle or is disconnected for the specified time period. Shorter time-out values allow unused VDAs to power off and save costs. Click Save. Deploy the catalog After you complete the configuration panels, click Start Deployment to start the catalog creation. Creating a catalog can take 1 to 2 hours (or longer, if you specified a large number of VMs). When a catalog is created: - A resource group (and a storage account in that resource group) for the workload machines are created automatically in Azure. - The VMs are named Xenappxx-xx-yyy, where xx is derived from an environmental factor and yy is an ordinal number. Publish apps and assign subscribers for a catalog To complete the catalog after it is deployed, you must publish one app or desktop, and assign at least one subscriber. The image you used to create the catalog includes the applications (or desktop) that you can publish. You can select applications from the Start menu or specify a directory path on the machine. - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs. - In the ellipsis menu (…) for the catalog that was created, select Manage Catalog. Select Publish Apps and Assign Subscribers. The following page displays. - In the Publish Apps and Assign Subscribers dialog box, click Publish. The Publish to catalog-name page contains three choices. Complete at least one. Optionally, you can then choose another (for example, to publish both apps and desktops using this catalog). - To publish apps located on the Start menu: - Select Publish from Start Menu. - Select the applications from the list. - To publish apps by specifying their location and other information: - Select Publish using Path. - Enter each application’s name and path (for example, c:\Windows\system1\app.exe). - Optionally, enter a description that will appear in the user’s workspace, command line parameters, and working directory. - To change the icon that represents the published app, click Change icon and then navigate to the location of the icon. A message appears if the selected icon cannot be extracted. In that case, you can retry or continue using the existing icon. Click Publish App. - To publish a desktop: - Select Publish desktop. - Enter the name of the desktop. - Optionally, enter a description that will appear in the user’s workspace. Click Publish Desktop. After you add apps or desktops, they appear in the list under the selectors. To delete an app or desktop you added, select the button to the left of the entry (or click the trash icon next to the entry) and then click Remove. Later, if you want to unpublish an app or desktop, select the button to the left of the entry and then click Unpublish. In the Publish Apps and Assign Subscribers dialog box, click either Manage App Subscribers or Manage Desktop Subscribers. - Select a domain and then search for a user or user group. User assignments for apps and desktops are separate. To assign a user access to both apps and desktops, assign that user with Manage App Subscribers and with Manage Desktop Subscribers. After you add a user or group, it appears in the list under the selectors. 
To delete a user or group you selected, click the trash can icon next to the entry and click Remove. Later, if you want to remove users, select the button to the left of the entry and then click Remove Selected. Test and share the workspace link After you deploy a catalog, publish apps, and assign subscribers, you’re provided the link that your subscribers use to access the apps and desktops you published for them. - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs. - In the ellipsis menu (…) for the catalog, select Manage Catalog. - Select Test and Share Workspace Link. In the following graphic, the workspace link appears in the circled area. Share this link with your subscribers. The right portion of the page lists the workspace URL, plus information about the catalog’s master image, resource location, Azure subscription, and domain. See Workspace experience for more information. Update master images and catalogsUpdate master images and catalogs To update or add applications, update the virtual machine that you used to create the catalog’s master image. Update the master image - Power on the master image VM. Powering on the machine does not affect the master image installed in Azure Resource Manager. - Install any updates or applications on the VM. - Shut down the VM. - In the Virtual Apps Essentials console, add the new image that includes the path to the VM’s VHD image. Update a catalog with a new image - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs. - Click the ellipsis menu for the catalog and then click Update Catalog Image. - Select either Link an existing image or Import a new image. Enter the information that is appropriate for your choice. - In Time until automatic log-off, choose the amount of time before the session ends. - Click Update. When you start the catalog update, users can continue to work until the initial processing completes. Then, users receive a warning message to save their work and close applications. After closing all active sessions on the VDA, the update finishes on that VDA. If users do not log off in the amount of time given, the session closes automatically. Update the number of VDAs in a catalog - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - Click the Manage tab. - On the Catalogs tab, select a catalog. - On the Capacity tab, under Select scale settings, click Edit. - Change the Maximum number of running instances value to the desired VDA count for the catalog. - Click Save. Monitor machine statesMonitor machine states When you select a catalog, the Machines tab on the catalog summary page lists all of the machines in that catalog. The display includes each machine’s power and registration states, and the current session count. You can turn maintenance mode on or off for a machine. Turning on maintenance mode prevents new connections from being made to the machine. Users can connect to existing sessions, but they cannot start new sessions. You might want to place a machine in maintenance mode before applying patches. If you turn on maintenance mode for one or more machines, Smart Scale is temporarily disabled for all machines in that catalog. 
Either of the following actions will enable Smart Scale again: - Click Enable Smart Scale in the warning at the top of the screen. This action automatically turns off maintenance mode for all machines in the catalog that have maintenance mode turned on. - Explicitly turn off maintenance mode for each machine that currently has maintenance mode turned on. Monitor the serviceMonitor the service - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - Click the Monitor tab. Session information To monitor the overall performance of Citrix Virtual Apps Essentials: Select the catalog that you want to monitor. You can view information on sessions, logon duration, and other information. Choose a session and then: - Disconnect the session - Log off from the session - Send a message Click each session to view extra details about the session such as processes, applications running, and more. Usage information Usage information shows aggregated data for all catalogs (rather than a specified catalog). - Usage Overview displays the total number of application launches and the number of unique users who launched apps over the past six weeks. - Top Apps lists the most frequently used apps for the current and previous months. Hovering over an entry displays the number of times that application was launched. - Top Users lists the top ten users for the current and previous months, with the number of times they launched applications. Weekly data intervals are Monday (UTC 00:00) through the query time. Monthly data intervals are the first day of the month (UTC 00:00) through the query time. on and log off experience. The profile optimization service requires a file share where all the personal settings persist. You must specify the file share as a UNC path. The path can contain system environment variables, Active Directory user attributes, or Profile Management variables. To learn more about the format of the UNC text string, see To specify the path to the user store. You configure Profile Management in Citrix Cloud. To configure Profile Management - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs. - Click the name of the catalog. - Click More Settings. - In Set up Profile Management in Azure subscription, enter the path to the profile share. For example, \fileserver\share#sAMAccountName# - ClickConfigure the Microsoft RDS License Server Citrix Virtual Apps Essentials accesses Windows Server remote session capabilities that would typically require a Remote Desktop Services client access license (RDS CAL). The VDA must be able to contact an RDS license server to request RDS CALs. Install and activate the license server. For more information, see Activate the Remote Desktop Services License Server. For proof of concept environments, you can use the grace period provided by Microsoft. With this method, you can have Virtual Apps Essentials apply the license server settings. You can configure the license server and per user mode in the RDS console on the master image. You can also configure the license server using Microsoft Group Policy settings. For more information, see Specify the Remote Desktop Licensing Mode for an RD Session Host Server. - If you purchased CAL licenses from Microsoft Remote Access, you do not have to install the licenses. 
You can purchase licenses from Microsoft Remote Access in the Azure Marketplace, along with Virtual Apps Essentials. To configure the RDS license server - If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. - On the Manage tab, click Catalogs. - Select the catalog and then select More Settings. - In Enter the FQDN of the license server, type the fully qualified domain name of the license server. - Click Save. Connect usersConnect users Workspace experience Virtual Apps Essentials in Citrix Cloud enables the workspace experience for each customer. After you create the first catalog, Virtual Apps Essentials configures the workspace URL automatically. The URL is the one from which users can access their applications and desktops. The workspace URL appears in the catalog details panel on the Summary tab. Virtual Apps Essentials does not support on-premises StoreFront deployments. After creating a catalog, you can use Workspace Configuration to customize the workspace URL and the appearance of workspaces. You can also enable the preview version of federated authentication using Azure Active Directory. Enabling federated authentication using Azure Active Directory includes the following tasks: - Set Azure AD as your identify provider. For more information, see Connect Azure Active Directory to Citrix Cloud. - Enable Azure AD for authentication to the Citrix Workspace experience. For more information, see Workspace configuration. Citrix Gateway service To allow users secure access to their published apps, Virtual Apps Essentials uses the Citrix Gateway service. This service does not need any configuration by you. Each user is limited to 1-GB outbound data transfer per month. You can purchase a 25 GB add-on from the Azure Marketplace. The charge for the add-on is on a monthly basis. Cancel Virtual Apps EssentialsCancel Virtual Apps Essentials You can incur Azure charges from Virtual Apps Essentials because of the following elements: - Virtual Apps Essentials subscription - Azure resource created by Virtual Apps Essentials The Microsoft Azure charge for the Virtual Apps Essentials service is on a monthly basis. When you purchase Virtual Apps Essentials, you are charged for the current month. If you cancel your order, your service will not renew for the next month. You continue to have access to Virtual Apps Essentials until the end of the current month by using Citrix Cloud. Your Azure bill can contain multiple line items for Virtual Apps Essentials, including: - Virtual Apps Essentials service subscription - Citrix Gateway service add-on, if purchased - Microsoft Remote Access fee - Azure resource created when using Virtual Apps Essentials Cancel Virtual Apps Essentials in Azure To cancel your Virtual Apps Essentials subscription, delete the order resource in the Azure portal. - Click All Resources. - In the Type column, double-click to open Citrix Virtual Apps Essentials. - Click the trash icon. The delete process starts. Delete the Azure resources created by Virtual Apps Essentials In Citrix Cloud, delete the catalogs and images associated with your account. Also, remove the subscription links and ensure the removal of the Cloud Connector VMs from Citrix Cloud. If you are not already in Citrix Cloud, sign in. In the upper left menu, select My Services > Virtual Apps and Desktops. To delete catalogs - On the Manage tab, click Catalogs. - In the ellipsis menu (…) next to the catalog you want to remove, select Delete Catalog. 
- Repeat the previous step for each catalog you want to delete. To remove master images - On the Manage tab, click Master Images. - Select an image and click Remove. - Repeat the previous step for each master image you want to delete. To remove links to Azure subscriptions - On the Manage tab, click Subscriptions. - Click the trash icon next to the subscription. The Azure portal opens. - Sign in to your Azure subscription, using your global administrator Azure credentials. - Click Accept to allow Virtual Apps Essentials to access your Azure account. - Click Remove to unlink the subscription. - Repeat the preceding steps for other linked Azure subscriptions. To ensure removal of the Citrix Cloud Connector VMs - In the upper left menu, select Resource Locations. - Identify the Cloud Connector VMs. - Delete the VMs from the Resource page in Azure. Partner resources This service is now available through the Microsoft Cloud Solution Provider channel. For details, see Microsoft CSP enablement for Citrix Essentials. Get help If you have problems with Virtual Apps Essentials, open a ticket by following instructions in How to Get Help and Support. More information For information about using Citrix policies in a Virtual Apps Essentials environment, see CTX220345. To troubleshoot catalog creation failures, see CTX224151. Upgrade to Citrix Virtual Apps and Desktops Standard for Azure Learn how to upgrade from Citrix Virtual Apps Essentials to Citrix Virtual Apps and Desktops Standard for Azure.
https://docs.citrix.com/en-us/citrix-cloud/citrix-virtual-apps-essentials
2021-06-12T15:16:02
CC-MAIN-2021-25
1623487584018.1
[array(['/en-us/citrix-cloud/media/apps-essentials-basic.png', 'Virtual Apps Essentials standard deployment'], dtype=object) array(['/en-us/citrix-cloud/media/apps-essentials-onprem.png', 'Virtual Apps Essentials on-premises deployment'], dtype=object) array(['/en-us/citrix-cloud/media/xae-123.png', 'Virtual Apps Essentials task list'], dtype=object) array(['/en-us/citrix-cloud/media/xae-cat-pickaname.png', 'Virtual Apps Essentials Pick a name page'], dtype=object) array(['/en-us/citrix-cloud/media/xae-cat-link-azure-sub.png', 'Virtual Apps Essentials link Azure subscription page'], dtype=object) array(['/en-us/citrix-cloud/media/xae-cat-join-local-domain.png', 'Virtual Apps Essentials join local domain page'], dtype=object) array(['/en-us/citrix-cloud/media/xae-cat-master-image.png', 'Virtual Apps Essentials choose a master image page'], dtype=object) array(['/en-us/citrix-cloud/media/storage-compute-type-75.png', 'Virtual Apps Essentials pick storage and compute type page'], dtype=object) array(['/en-us/citrix-cloud/media/power-mgt-75.png', 'Virtual Apps Essentials power management settings page'], dtype=object) array(['/en-us/citrix-cloud/media/xae-workspace-link.png', 'Workspace link highlighted'], dtype=object) array(['/en-us/citrix-cloud/media/cat-machines.png', 'Catalog page'], dtype=object) array(['/en-us/citrix-cloud/media/maint-mode-smart-scale.png', 'Smart Scale maintenance mode page'], dtype=object) ]
docs.citrix.com
MIC-1 The mic1 package provides tools for working with the MIC-1 processor architecture that appears in Andrew S. Tanenbaum’s textbook Structured Computer Organization. 1 MIC-1 Description The MIC-1 is a CPU with 16 general purpose 16-bit registers. Registers 5, 6, 7, 8, and 9 have default values 0000000000000000, 0000000000000001, 1111111111111111, 0000111111111111, and 0000000011111111 respectively. It runs a single 256-instruction microprogram embedded in a control store ROM. Its ALU supports addition, bitwise AND, and bitwise negation. The ALU outputs flags for whether its result was negative or zero. The ALU is connected to a 1-bit shifter that can shift left, right, or not at all. Its memory interface is two flags (one for reading, one for writing) as well as two 16-bit registers for interfacing with memory (the MAR–Memory Address Register–and MBR–Memory Buffer Register.) The top 4 bits of the MAR is ignored, so the MIC-1 has a 12-bit address space. Memory access is delayed by one cycle, during which the appropriate flag must be asserted. If both flags are asserted, then the external controller halts the machine. The ALU’s A side is either a register or the MBR. The shifter result may be output to the MBR or any register. The MAR may be written from the ALU’s B side. The top four words of memory (4092-4095) are wired to a UART. The first two connect to the receiver and the second two connect to the transmitter. The first of each holds an 8-bit character to be outputed in the bottom 8 bits. The second of each holds a 4 bit control flag in its lowest bits. The control bits are (from most to least significant): On, Interrupt, Done, Busy. The control bits are initialized to all zero. If the microprogram sets the On bit, then the component is enabled and stabilizes. The receiver stabilizes to not Done and Busy, while the transmitter stabilizes to Done and not Busy. When the receiver receives a character, it switches to Done and not Busy until the character is read by the CPU. When the program writes a character to the transmit buffer while the transmitter is On, then the transmitter switches to not Done and Busy, until the transmission is finished. The Interrupt flag is currently ignored by both components. 2 mic1 simulator raco mic1 ‹option› ... ‹microcode-path› ‹memory-image-path› simulates the execution of the MIC-1. ‹microcode-path› must be a path to a file. If the extension is .prom, then it must be in the Microcode Image format. If the extension is .mc, then it must be in the MAL microcode language format and it will be compiled before loading. ‹memory-image-path› must be a path to a file. If the extension is .o, then it must be in the Memory Image format. If the extension is .s, then it must be in the MAC-1 macro-assembly format and it will be compiled before loading. It accepts the following ‹option›s: --ll — simulates at the NAND gate level via compilation to a C program using cc. --lli — simulates at the NAND gate level via an interpreter. --hl — simulates at a high-level (default) --pc ‹pc-str› — specifies the initial value of the register 0, the Program Counter (default: 0) --sp ‹sp-str› — specifies the initial value of the register 2, the Stack Pointer (default: 1024) 2.1 Microcode Image A microcode image matches the grammar ‹PROM›. In addition, a microcode image may only contain up to 256 ‹mir› lines. 2.2 Memory Image A memory image matches the grammar ‹Image›. In addition, a memory image may only contain up to 4096 ‹value› lines. 
3 mcc microcode compiler raco mcc ‹microcode-path› compiles MAL microcode language into the Microcode Image format. ‹microcode-path› must be a path to a file in the MAL microcode language format. raco mcc replaces the extension of this path with .prom and writes the corresponding Microcode Image. 3.1 MAL microcode language While it is possible to directly write in the Microcode Image format, it is extremely error-prone and tedious. MAL provides a convenient way to write microprograms. MAL supports block comments in between { and }. Labels are sequences of any characters except (,:;). A MAL program matches the following grammar ‹Program›: ‹Instruction›s are composed of multiple ‹Component›s. Each ‹Component› determines some fields of the Microcode Image. If two ‹Component›s assign the same field differently, then a compilation error is raised. The following grammar specifies the various ‹Component›s: The remaining nonterminals are specified by the following grammar: If a MAL program produces an image greater than 256 instructions, then no error is raised during compilation. For examples see the Github repository, specifically: fib.mc implements Fibonacci and macro-v1.mc implements an interpreter for compiled MAC-1 macro-assembly. 4 masm macroassembler raco masm ‹asm-path› compiles MAC-1 macro-assembly into the Memory Image format. ‹asm-path› must be a path to a file in the MAC-1 macro-assembly format. raco masm replaces the extension of this path with .o and writes the corresponding Memory Image. 4.1 MAC-1 macro-assembly The MAC-1 is a low-level virtual machine implemented by a MIC-1 microprogram. It exposes a single register (AC) to programmers and has an internal state defined by two other registers (PC and SP). The assembly language supports line comments starting with the ; character. Whitespace is never significant. Literal integers are supported in decimal format. Literal strings compile to packed 16-bit words with early characters in least significant bits. Labels are any alphanumeric character sequence starting with an alphabetic character and ending in :. A label definition is a label not in an argument position or immediately after a label definition. The character sequence .LOC followed by a literal nonnegative integer skips the given amount of space in the resulting image, filling it with 1111111111111111. The following instructions are recognized: If a MAC-1 program produces an image greater than 4096 instructions, then no error is raised during compilation. For examples see the Github repository, specifically: fib.s implements Fibonacci and IO_str_and_echo.s implements an echo program. 5 HDL - General Purpose Hardware Description Language The implementation contains a general purpose hardware description language that compiles circuits to networks of NAND gates. The network can be simulated in Racket or via C compilation. In the future it may be documented.
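To make the toolchain concrete, here is a minimal sketch of a compile-and-run session using the example files named above (pairing the macro-v1 microprogram with the assembled fib.s program is an assumption based on their descriptions; adjust the paths to wherever the files live):

raco mcc macro-v1.mc                  # compiles the MAL microprogram to macro-v1.prom
raco masm fib.s                       # assembles the MAC-1 program to fib.o
raco mic1 --hl macro-v1.prom fib.o    # runs the high-level simulator on the results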
https://docs.racket-lang.org/mic1/index.html
2021-06-12T14:00:45
CC-MAIN-2021-25
1623487584018.1
[]
docs.racket-lang.org
Now that you are familiar with ways to find and explore data in KBase, you can select or upload data to analyze. The Data Panel in a Narrative shows the data objects that are currently available in that particular Narrative. From the Data Panel, you can access the data slide-out, which allows you to search for data of interest and add it to your Narrative. In the Data Panel, click the red "Add Data" button, the red “+” button, or the right arrow at the upper right of the panel to access the Data Browser slide-out. Data Privacy Any data that you upload to KBase is kept private unless you explicitly choose to share it. You can share any of your Narratives (including their associated data) with one or more specific users, or make it publicly available to all KBase users. Please see the Sharing page for more information about how to do that. The Terms and Conditions page describes the KBase data policy. The first four tabs of the Data Browser (My Data, Shared With Me, Public, and Example) let you search data that is already in KBase. The Import tab lets you import data from your computer to your Narrative so that you can analyze it in KBase. The My Data tab shows data objects that you have added to your project. You may need to refresh this tab to see your most recently added data. The Shared With Me and Public tabs display datasets that others have loaded and made accessible to you (or to everyone). Data within each group is searchable and can be filtered. Since there are a large number of public datasets, you may wish to filter them by data type (using the pulldown selector on the left) or narrow the list by searching (in the “Search data” text box) for specific text in the data objects’ names. The Example tab shows datasets that have been pre-loaded for use with particular apps. These can be handy for trying out the Narrative Interface. The Import tab allows you to upload your own datasets for analysis. This is explained in more detail below. If you hover your cursor over any data object under the first four tabs, options will appear allowing you to add that object to the Narrative or find out more about it. In the previous section, we described the process of adding a genome to your Narrative from the public data in KBase. Now let’s check out the different data types available under the Example tab. The icon to the left of each data object represents its data type. As described in the previous section, the blue "< Add" button next to these icons lets you add the data object to your Narrative. You can add more more data to your Narrative from the Example tab to try out various KBase apps. The Import tab lets you drag & drop data from your computer into your Staging Area to import into your Narrative, where you can then analyze it using KBase’s analysis apps. To upload data from your computer (or a Globus endpoint or URL), choose the rightmost tab of the Data Browser to open the Import tab. You can then click the “?” icon just below the drop zone to the right to launch a short interactive tour that shows the different parts of this user interface. Getting data from your computer to your KBase Narrative is a three-step process: Drag and drop the data file(s) from your computer to the new Import tab to upload them to your Staging Area. Choose a format for importing data from your Staging Area into your Narrative. Run the Import app that is created. What is a Staging Area? Your Staging Area is a sort of “halfway house” for your uploaded data files. 
It is private to you–no one else can see the data in your Staging Area. The files in your Staging Area stay there and can be seen from any of your Narratives. When you’re ready to add data to a Narrative, you can choose a data type from the pulldown menu next to a data file in your staging area in order to add it to your Narrative as an object of that type–see below for instructions. Why have a Staging Area? It’s more robust and extensible this way. Unlike our previous importer, the staging upload can handle large data files without timing out. The user interface is more intuitive: you can drag and drop one or more data files into your Staging Area, or even whole directories or zip files. Finally, the new importer is easier for our developers to extend to new data types. What's the difference between 'Upload' and 'Import'? In this documentation, we will use “Upload” to refer to getting data from your computer into your Staging Area and “Import” to refer to the process of converting a data file from your Staging Area into a data object in your Narrative that you can analyze with any of our analysis apps. Sometimes we use the term “Import” to cover the whole Upload+Import process. Drag & Drop Limitations The drag & drop from your local computer works for many files, but there is a size limit that depends on your computer and browser. Some users have reported problems around 20GB. For larger files, use the Globus Online transfer. Find the file(s) you want to import into your KBase account, and drag them into the drop zone (the rectangular area surrounded by a dashed line). You can select multiple files from your computer and drag them all at once. (In the example below, the user is dragging two files into the dashed area.) You can also select a folder of data files and drag the folder into the Staging Area drop zone. If you don’t like using drag & drop, you can instead click in the upload area to open a file chooser and select a file from your computer to upload. While files are being uploaded to your Staging Area, you’ll see a green progress bar. When the file is done uploading, you will see it appear in the list of files in your Staging Area. If you don’t see your file, try clicking the reload icon on the left above the file list to refresh the view. By default, these are sorted by age, with the most recently uploaded file at the top. To sort the list by other fields, such as name or size, click a column header. 90-day lifetime for files in your Staging Area Your Staging Area is meant to be a temporary holding area for data you want to import into your KBase account. After adding files to your Staging Area, be sure to import them into your Narrative soon, as files in the Staging Area are automatically removed after 90 days. (Data objects that you have imported to your Narrative last indefinitely.) Globus is a data management and file transfer system that can facilitate bulk transfer of data (either large data files or a large number of files) between two endpoints. The endpoints that apply here are KBase, JGI, and your local computer. The KBase endpoint is called “KBase Bulk Share,” and JGI has their own way to link to Globus. To do any transfer using Globus, you will need a Globus account. See Transferring Data with Globus for more documentation on using Globus. Uploading data from JGI If you are a JGI user, you can transfer public genome reads and assemblies (as well as your private data and annotated genomes) from JGI to your KBase account—see the JGI data transfer page for instructions. 
Below the link to Globus, another link says “Click here to use an App to upload from a public URL” (for example, a GenBank ftp URL, or a Dropbox or Google Drive URL that is publicly accessible). Clicking this link adds the “Upload File to Staging from Web” App to your Narrative: There are also several apps that import specific file types (single- or paired-end reads or SRA files) from a URL directly to your Narrative, bypassing your Staging Area. These are available from the Apps Panel and the App Catalog. The files in your Staging Area are ready to import into your Narrative as KBase data objects that can be used in your analyses. To import a file from your Staging Area, choose a format (data type) from the pulldown menu to the right of the file’s age. (You can find out more about KBase data types and accepted formats in the Upload/Download Guide.) Then click the import icon to the right of the format menu. When you click the import icon, the Data Browser slides shut and an Import app cell (tailored to the chosen format) is created in your Narrative, with the appropriate parameters filled in. For example, here’s an import app created by choosing “GenBank” as the import format: Importing different types of data This example shows how to import GenBank data. Please see the Upload/Download Guide for detailed instructions for other supported data types. If the GenBank file came from a different source, use the pulldown menu to select it. You can change the output object name, if desired, and then click the "Run" button to start the import. When the import is done, you should see the message “Finished with success” near the top of the app cell, and some information about the app run. If you look at your Data Panel, you should see the new data object created by the import. You can now use this data object as input into the relevant KBase apps. If you want to see which apps accept a particular data type as input, you can click the “…” menu in the data object cell that appears when you hover over it, and then use the “Show Apps with this as input” icon to filter the apps in the Apps Panel. What if my import fails? Sometimes an import doesn’t work. One of the most common causes of failure is attempting to import a file that’s the wrong data type, or not the expected format for that data type. For example, the screenshot below shows what a user got when they tried to import a GenBank file as Media. You can also look at common import errors and their meanings. If the importer objected to something in your file, check the Data Upload/Download guide for details about the relevant format. In some cases, the cause of an import error will not be obvious. If you can’t figure out why your import isn’t working, please contact us (via the Help Board) for help. Note, however, that no one besides you has access to your Staging Area, so we will not be able to see the files you uploaded to your Staging Area. You may need to attach your input file to your Help Board ticket in order for us to diagnose the problem. The list of files in your Staging Area includes their name, size, and age (from when they were uploaded). If you have a lot of files in your Staging Area, you may want to use the Search box to locate specific files. Compressed or zipped files have a little double-arrow icon next to the filename. You can click that icon to unpack these files. For how to upload and import file types, follow this guide. 
For more information about a file in your staging area, click the arrow to the left of the filename to open a tab like this: You can click “First 10 lines” or “Last 10 lines” to see that portion of the file: Opening the information about a file in your Staging Area also reveals a trash can icon that allows you to remove the file from your staging area. You will be asked to confirm that you want to delete the file. This action is not reversible. Note that if you had already imported the file to your Narrative as a data object, that object won’t go away when you delete the file in your Staging Area. If you want to delete a data object, you can do that in your Data Panel. In progress KBase’s import functionality and user interface are under active development. We welcome your bug reports and suggestions for future upload functionality. Once you have added data–your own data or reference data that is already in KBase–to your Narrative, you will be ready for the exciting part: analyzing it! The next two sections describe how to choose and run an app to analyze your data.
https://docs.kbase.us/getting-started/narrative/add-data
2021-06-12T14:50:27
CC-MAIN-2021-25
1623487584018.1
[]
docs.kbase.us
VIP Dashboard The VIP Dashboard is the home for managing your VIP Go applications. Upon signing in, a list of all your apps is visible: For updates and news on VIP, click on the info icon in the upper right-hand corner. You will also find shortcuts to documentation. User access and authentication GitHub is used to authenticate access to the VIP Dashboard. In order for a user to have access to the VIP Dashboard for an application, the user must have access to the GitHub repo for that application. Some parts of the dashboard require read access to the repository to view, while other parts, and many actions, require write or admin access. An “ e002 User Not Found!” error indicates that we were unable to find any VIP Go repositories the user has access to. If the GitHub user account does not have access to any VIP Go repositories, authentication will not be successful. Tools Dashboard Displays an activity log for each of the app’s environments. Click on the shortcuts for each environment to see the corresponding WordPress login and GitHub repositories. Health A snapshot of the health of your WordPress applications running on the VIP Platform. Data Sync Features a real-time data syncing tool to copy content from production to a child environment. Find out more about data sync here. Domains Navigate a list of your domains available to the app. Switch environments and see which domains are associated with each environment, map domains, view DNS Instructions, and provision Let’s Encrypt Certificates. WP-CLI The VIP-CLI tool offers a command line interface for interacting with your applications on the VIP Go platform. Click on your profile icon to set up the VIP-CLI. IP Allow List Manage who can access your app via an IP Allow List. Basic Authentication Manage Basic HTTP Authentication credentials for your app HTTP request log shipping Ship your HTTP logs to an AWS S3 bucket of your choice. As we continue to design and develop new platform functionality, we are looking for testers to provide feedback on early ideas and prototypes. If you would like to get involved, please get in touch via Zendesk.
https://docs.wpvip.com/technical-references/vip-dashboard/
2021-06-12T14:55:14
CC-MAIN-2021-25
1623487584018.1
[array(['https://wpvip.com/wp-content/uploads/2018/03/vipdashboard.png?w=280', None], dtype=object) array(['https://wpvip.com/wp-content/uploads/2018/03/vipdashboardapp.png?w=280', None], dtype=object) ]
docs.wpvip.com
Resetting a password Note: Password resets are disabled for Single Sign-On (SSO) users. SSO password resets should be handled through the SSO administrator outside the BetterExaminations application. Description Users with a Role that has "Manage organisation settings" enabled can reset passwords on a user's behalf. If you're unsure whether your Role has this setting enabled, you can check in the Role Permissions section of the Edit User modal: Resetting a password On the Manage User page, select the Actions button and, from the dropdown, select Reset Password: The following confirmation screen will appear: When the Reset password button has been selected, an email will be sent to the User and a success notification will appear in the bottom left of the screen: The user will receive an email with instructions on resetting their password:
https://docs.betterexaminations.com/betterexaminations-online/exam-administrators/managing-users/resetting-a-password/
2021-06-12T14:10:42
CC-MAIN-2021-25
1623487584018.1
[array(['https://downloads.intercomcdn.com/i/o/290367177/8574100d06797f3b194c23c1/BetterExaminations_Online.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/290370109/8a419ee9ec84c4af0cd754f9/Fullscreen_21_01_2021__16_47.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/290369493/39e27775e030d5f46f3c9599/BetterExaminations_Online.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/290373375/c433369a4a9c9e9605dc0c92/Fullscreen_21_01_2021__16_51.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/290374312/d615eac18a654099a1ac3b54/image.png', None], dtype=object) ]
docs.betterexaminations.com
3.3.2.5 <topichead> The <topichead> element provides a title-only entry in a navigation map, which should appear as a heading when the map is rendered as a table of contents. In print contexts it should also appear as a heading in the rendered content. The navigation title can now be specified with a <navtitle> element within the <topicmeta> element, so the <topichead> element no longer requires the @navtitle attribute. In order to ensure backward compatibility with earlier versions of DITA, the new <navtitle> element is not required. However, a <topichead> element must contain either a @navtitle attribute or a <topicmeta> element that contains a <navtitle> element. DITA processors SHOULD generate a warning if a navigation title is not specified. Content models See appendix for information about this element in OASIS document type shells.
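As a short illustration (the map title and topic file names are hypothetical), a <topichead> whose navigation title is supplied through <topicmeta>/<navtitle> might appear in a DITA map as follows:

<map>
  <title>Administration Guide</title>
  <topichead>
    <topicmeta>
      <navtitle>Troubleshooting</navtitle>
    </topicmeta>
    <topicref href="connection-errors.dita"/>
    <topicref href="log-files.dita"/>
  </topichead>
</map>

Here "Troubleshooting" appears as a heading in the rendered table of contents even though no topic is associated with the <topichead> itself.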
https://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part2-tech-content/langRef/base/topichead.html
2021-06-12T15:23:34
CC-MAIN-2021-25
1623487584018.1
[]
docs.oasis-open.org
Keywords help your customers find the articles they are interested in. Follow these steps to add them: 1. Open the Help Center page. 2. Go to your article collection and select the article you would like to add keywords to. 3. Scroll down the editing panel on the right and type each keyword into the field called Add a new keyword, then press Enter. Your keywords are saved automatically. Now you know how to add keywords to an article. 🙂
https://docs.customerly.help/help-center/how-to-add-keywords-to-an-article
2021-06-12T15:18:57
CC-MAIN-2021-25
1623487584018.1
[]
docs.customerly.help
Getting started with Kodein-DI Kodein-DI is a Dependency Injection library. It allows you to bind your business unit interfaces with their implementation and thus having each business unit being independent. Choose & Install Flavour. Install With Maven Add the JCenter repository: <repositories> <repository> <id>jcenter</id> <url></url> </repository> </repositories> Then add the dependency: <dependencies> <dependency> <groupId>org.kodein.di</groupId> <artifactId>kodein-di-jvm</artifactId> <version>7.0.0</version> </dependency> </dependencies> With Gradle Add the JCenter repository: buildscript { repositories { jcenter() } } Then add the dependency: Using Gradle 6+ dependencies { implementation 'org.kodein.di:kodein-di:7.0.0' } Using Gradle 5.x dependencies { implementation("org.kodein.di:kodein-di:7.0.0") } You need to activate the preview feature GRADLE_METADATA in your .setings.gradle.kts file. enableFeaturePreview("GRADLE_METADATA") Using Gradle 4.x dependencies { implementation("org.kodein.di:kodein-di-jvm:7.0.0") } Bindings Definition In DI, each business unit will have dependencies. Those dependencies should (nearly almost) always be interfaces. This allows: Loose coupling: the business unit knows what it needs, not how those needs are fulfilled. Unit testing: You can unit test the business unit by mocking its dependencies. Separation: Different people can work on different units / dependencies. Each business unit and dependency need to be managed. Some dependencies need to be created on demand, while other will need to exist only once. For example, a Random object may need to be re-created every time one is needed, while a Database object should exist only once in the application. Have a look at these two sentences: "I want to bind the Randomtype to a provider that creates a SecureRandomimplementation. "I want to bind the Databasetype to a singleton that contains a SQLiteDatabaseimplementation. In DI, you bind a type (often an interface) to a binding that manages an implementation. A binding is responsible for returning the implementation when asked. In this example, we have seen two different bindings: The provider always returns a new implementation instance. The singleton creates only one implementation instance, and always returns that same instance. Declaration In Kodein-DI, bindings are declared in a DI Block. The syntax is quite simple: val kodein = DI { bind<Random>() with provider { SecureRandom() } bind<Database>() with singleton { SQLiteDatabase() } } As you can see, Kodein-DI offers a DSL (Domain Specific Language) that allows to very easily declare a binding. Kodein-DI offers many bindings that can manage implementations: provider, singleton, factory, multiton, instance, and more, which you can read about in the bindings section of the core documentation. Most of the time, the type of the interface of the dependency is enough to identify it. There is only one Database in the application, so if I’m asking for a Database, there is no question of which Database I need: I need the database. Same goes for Random. There is only one Random implementation that I am going to use. If I am asking for a Random implementation, I always want the same type of random: SecureRandom. There are times, however, where the type of the dependency is not enough to identify it. For example, you may have two Database in a mobile application: one being local, and another being a proxy to a distant Database. 
For cases like this, Kodein-DI allows you to "tag" a binding: attach an additional piece of information that identifies it.

val kodein = DI {
    bind<Database>(tag = "local") with singleton { SQLiteDatabase() }
    bind<Database>(tag = "remote") with provider { /* remote proxy implementation */ }
}

Kodein-DI supports two different methods to allow a business unit to access its dependencies: injection and retrieval.

When dependencies are injected, the class is provided its dependencies at construction. When dependencies are retrieved, the class is responsible for getting its own dependencies.

Dependency injection is more pure in the sense that an injected class has its dependencies passed at construction and therefore does not know anything about the dependency container. Do you need that independence? If you are building a library that will be used in multiple architectures, you probably do. If you are building an application, you probably don't.

Injection

If you want your class to be injected, then you need to declare its dependencies at construction:

class Presenter(private val db: Database, private val rnd: Random) {
}

Now you need to be able to create a new instance of this Presenter class. Kodein-DI has you covered: head to the Retrieval: Direct section of the core documentation.

Transitive dependencies

Let's say we want to declare the Presenter in a binding. It has its own dependencies. Dependencies of dependencies are transitive dependencies.

Handling those dependencies is actually very easy.

If you are using injection, you can pass the arguments the exact same way:

val di = DI {
    bind<Presenter>() with singleton { Presenter(instance(), instance()) }
}

If you are using retrieval, simply pass the di property:

val di = DI {
    bind<Presenter>() with singleton { Presenter(di) }
}
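For completeness, here is a minimal sketch of the retrieval style mentioned above. It assumes the DIAware/instance() API from the org.kodein.di package; check the retrieval documentation of your Kodein-DI version for the exact imports.

import org.kodein.di.DI
import org.kodein.di.DIAware
import org.kodein.di.instance

class Presenter(override val di: DI) : DIAware {
    // Each dependency is looked up lazily in the DI container when first used.
    private val db: Database by instance(tag = "local")
    private val rnd: Random by instance()
}

With this style, Presenter carries a reference to the container and resolves its own dependencies, which is convenient in an application but ties the class to Kodein-DI.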
https://docs.kodein.org/kodein-di/7.0/getting-started.html
2021-06-12T14:25:25
CC-MAIN-2021-25
1623487584018.1
[]
docs.kodein.org
Software and Hardware Requirements Network Configuration The following are the network configuration requirements for setting up in the cloud: The Iguazio cluster and NetApp Cloud Volumes must be in the same virtual private cloud. The cloud manager must have access to port 6443 on the Iguazio app nodes. We used Amazon Web Services in this technical report. However, users have the option of deploying the solution in any cloud provider. For on-premises testing in ONTAP AI with NVIDIA DGX-1, we used the Iguazio hosted DNS service for convenience. Clients must be able to access dynamically created DNS domains. Customers can use their own DNS if desired. Hardware Requirements You can install Iguazio on-premises in your own cluster. We have verified the solution in NetApp ONTAP AI with an NVIDIA DGX-1 system. The following table lists the hardware used to test this solution. The following table lists the software components required for on-premises testing: This solution was fully tested with Iguazio version 2.5 and NetApp Cloud Volumes ONTAP for AWS. The Iguazio cluster and NetApp software are both running on AWS.
https://docs.netapp.com/us-en/netapp-solutions/ai/mlrun_software_and_hardware_requirements.html
2021-06-12T13:35:30
CC-MAIN-2021-25
1623487584018.1
[]
docs.netapp.com
nss.squash.root This configuration parameter specifies whether you want to force root and wheel super-user accounts to be defined locally. If you set this parameter to true, Active Directory users with a UID of 0, a GID of 0, a user or group name of root, or a group name of wheel are not permitted to log on. Because the agent cannot prevent Active Directory users or groups from being assigned a UID or GID of 0, which would give those users or groups root-level access to the computers in a zone, you can use this parameter to prevent any Active Directory users with a UID or GID of 0 from logging on. Setting this parameter to true forces the privileged accounts to be defined as local accounts and not authenticated through Active Directory. For example: nss.squash.root: true If you set this parameter to false, you should use other configuration parameters, such as pam.ignore.users or user.ignore to skip Active Directory authentication for system accounts so that Active Directory users cannot be granted root access on the computers in the zones they are permitted to access. The default value for this parameter is true. It is possible, however, for an Active Directory administrator to override this setting through the use of group policy applied to a local computer, for example, by using the Sudo rights group policy. There is no way to effectively prevent the setting from being changed, except by disabling computer-based group policies in the local centrifydc.conf file or by strictly controlling who has permission to enable and apply group policies to computers that join an Active Directory domain. For information about disabling group policies using parameters in the local centrifydc.conf file, see gp.disable.all or gp.disable.machine in Customizing group policy configuration parameters
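As an illustrative sketch only (the account name is an example, and the exact list syntax for these parameters should be checked against their own documentation), the two approaches described above might look like this in /etc/centrifydc/centrifydc.conf:

# Default behavior: accounts with a UID or GID of 0 must be defined locally.
nss.squash.root: true

# Alternative: allow such accounts to come from Active Directory, but skip AD
# authentication for the privileged local account (see also user.ignore).
# nss.squash.root: false
# pam.ignore.users: root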
https://docs.centrify.com/Content/config-unix/nss_squash_root.htm
2021-06-12T14:33:12
CC-MAIN-2021-25
1623487584018.1
[]
docs.centrify.com
8.1 Scribble Abbrevs Helpers for making Scribble documents. The scribble-abbrevs module provides all the bindings documented on this page. 1 General Scribble Utilities General utilities, for any Scribble document. Similar to number->string, but adds commas to numbers with more than three digits. Examples: Renders a sequence of author names (with Oxford comma). Examples: Always use the Oxford comma. Remember the Maine truck drivers! (settlement) Renders the given content in "sfstyle" (serif-style). Renders the given strings as a paragraph title. Renders a clickable URL. Strips prefixes like "www." and "http://". Returns the English word for the given integer. Examples: The current implementation fails unless (abs i) is less than 1 quadrillion. Alias for (integer->word i #:title? #true). Wikipedia: roman numeral Converts a positive number to a roman numeral in the standard subtractive form. Examples: Converts a roman numeral to a natural number. Accepts subtractive or additive numbers, and the strings "nulla" and "N". Examples: Convert a positive number to a sequence of roman symbols. Example: Predicate for symbols that correspond to a roman numeral value. Examples: Each identifier id renders like the string "id", except that it might be prettier (avoid bad line breaks, bad spacing). 2 LaTeX Renderer Utilities Utilities for Scribble code that generates LaTeX output. Typesets \appendix in a paragraph with the 'pretitle style. In LaTeX, this marks the current "section" as the start of an appendix. Renders the given content in small caps style. Renders the given strings as-is in the output. Example: Renders an un-numbered definition for a technical term. Example: This usually looks good to me. Renders the section number for tag prefixed with the word "Section" (respectively, "section"). These functions assume that the following LaTeX command appears somewhere between the definition of Scribble’s SecRef (see Base Latex Macros) and the first occurrence of section-ref: 3 Documentation Renderer Utilities Utilities for Scribble code that generates Racket documentation. Similar to tech, but links to The Racket Guide. Similar to tech, but links to The Racket Reference. Typesets the contents of the given file as if its contents were wrapped in a racketblock. 4 Pict Utilities Examples: Adds a thin black border around the given pict.
https://docs.racket-lang.org/scribble-abbrevs/index.html
2021-06-12T14:20:25
CC-MAIN-2021-25
1623487584018.1
[]
docs.racket-lang.org
The Network Time Protocol (NTP) is a method to synchronize clocks to UTC (timezones are set locally by the administrator). The general goal of this software is to ensure that time is monotonically increasing, e.g. NTP will not skip a clock backwards in time and only makes adjustments that slow or speed up the local clock to move it towards the true definition of time. NTP will minimize offset (the difference from true time) and skew (difference of time change rate from the true rate), as it operates on a host. NTP is required to be running on perfSONAR servers that are performing OWAMP measurements. OWAMP is designed to make API calls to a running NTP daemon to determine the time and relative errors for hosts involved in a measurement. As an example, OWAMP works by sending streams of small UDP packets. Each is timestamped on one end, and then compared on the other end upon receipt. These accurate timestamps are used to calculate latency and jitter on a more fine level than is possible via other methods (ICMP packets used in Ping). It is possible to operate the perfSONAR tools without a running NTP daemon (e.g. by using certain switches in the tools to disable the check of time), however the resulting measurements of network performance will be skewed due to the lack of accurate timekeeping. If NTP is being configured manually (e.g. editing /etc/ntp.conf), there are several key points to be aware of to ensure that time is as accurate as possible on the host: - Not from the NTP pool. The default configuration for NTP is to use regional pool servers (e.g. some located in North America, or Europe, etc.). Pool servers work well if there is not a critical need for time accuracy in the range of a few milliseconds. For measurement needs, accessing time consistently, from well known and trusted servers, is a requirement. - Located topologically close to the server. In general they should be no more than 20ms away. Selecting a server that is far away makes the time subject to the increased latency, adding to a higher possible error. - Located on divergent network paths. The reasoning for this requirement is to prevent a catastrophic network failure from impacting all time servers that are synchronized against. For example, if 4 servers are selected, all located and operated by a peer network, and this peer suffered a network outage, time updates may not be available. - Of the same ‘stratum’. Stratum is defined to be the distance away from a true time source. For example, if a host has a CDMA clock attached to it, it is a stratum 1 server (the CDMA itself is stratum 0). Setting server choices to be all of the same stratum will aide the NTP algorithm selection process. In general try to synchronize against clocks that are stratum 1, 2, or 3. Higher stratum servers can impart additional error into measurement calculations. Once NTP is configured, it will take a day to fully stabilize a clock. This process happens quickly at first (e.g. sending a set of small synchronization packets every 60 seconds), and then slows down by querying on the order of minutes. Clock synchronization packets are UDP, and typically use port 123. NTP can be queried on a machine using the following command: [user@host ~]$ /usr/sbin/ntpq -p -c rv remote refid st t when poll reach delay offset jitter ============================================================================== *GPS_PALISADE(1) .CDMA. 
0 l 13 16 377 0.000 0.007 0.000 +albq-owamp-v4.e 198.128.2.10 2 u 25 64 377 54.065 0.031 0.010 atla-owamp.es.n 198.124.252.126 2 u 26 64 377 13.063 0.085 0.015 +sunn-owamp.es.n 198.129.252.106 2 u 15 64 377 62.270 -0.276 0.011 aofa-owamp.es.n 198.124.252.126 2 u 19 64 377 5.216 0.103 0.043 star-owamp.es.n 198.124.252.126 2 u 42 64 377 17.447 0.945 0.054 associd=0 status=0415 leap_none, sync_uhf_radio, 1 event, clock_sync, version="ntpd [email protected] Sat Nov 23 18:21:48 UTC 2013 (1)", processor="x86_64", system="Linux/2.6.32-504.1.3.el6.x86_64", leap=00, stratum=1, precision=-22, rootdelay=0.000, rootdisp=0.469, refid=CDMA, reftime=d82c943a.40804c37 Fri, Dec 5 2014 12:29:46.251, clock=d82c9448.0851cd31 Fri, Dec 5 2014 12:30:00.032, peer=22846, tc=5, mintc=3, offset=0.001, frequency=-56.648, sys_jitter=0.042, clk_jitter=0.000, clk_wander=0.000 Additional statistics can be found using this command: [user@host ~]$ ntpstat synchronised to NTP server (198.124.252.126) at stratum 2 time correct to within 38 ms polling server every 1024 s Even on well tuned servers, time abnormalities can be witnessed due to the sensitivity of tools like OWAMP. NTP works hard to get your host to within a couple of milliseconds of true time. When measuring latency between hosts that are very topologically close (e.g. LAN distances up to 5 milliseconds), it is quiet possible that NTP drift will be observed over time. The following figure shows that calculated latency can drift on the order of 1/2 a millisecond frequently as the system clocks are updated by NTP in the background. The next figure shows a closer view of this behavior. Clocks will drift slowly between the intervals that NTP adjusts them, particularly if NTP has stabilized and is running every couple of minutes instead of a more frequent pace. Hosts that are further away will not see this behavior, as the difference of a fractional millisecond is less important when the latency is 10s or 100s of milliseconds. There have been reports that perfSONAR produces inaccurate or impossible (negative) results when testing transit time on networks whose latency is in the sub-millisecond range (i.e., less than the clock accuracy provided by the Network Time Protocol served by hosts on the Internet). This is expected, although not necessarily desirable, behavior. It has further been suggested that perfSONAR should integrate support for the IEEE 1588 Precision Time Protocol (PTP), which can discipline a computer’s clock to within tens of microseconds, eliminating this problem. While the perfSONAR development team’s goal is to provide the most accurate measurements in as many situations as possible, there are two factors that lead us to question whether PTP support is a project we should undertake. First is the benefit to perfSONAR’s primary mission, which is identifying network problems along paths between domains. The latency in those paths tends to be large enough that NTP’s millisecond accuracy is sufficient for most testing. Installations requiring better have the option of installing local stratum-1 NTP servers, some of which can be had for under US$500 and have been known to discipline clients’ clocks to well under a quarter millisecond. Second is the deployment cost of PTP, which is currently very high. A campus wanting to use it would require at least two grandmaster clocks (US$2,500+ for basic models) and every router and switch between the grandmasters and perfSONAR nodes would have to be capable of functioning as a boundary or transparent clock. 
This feature is usually found in switches designed for low-latency applications, not the workgroup switches that end up in wiring closets. The least expensive PTP-capable switch we have been able to identify is the Cisco Nexus 3048TP (48 1GbE ports plus four 10Gb SFP+) at US$5,000. Using this equipment to upgrade a 25-switch installation would put the capital cost at about US$130,000 for just the infrastructure, or US$1,300 per perfSONAR node in a 100-node installation. In addition, all systems running perfSONAR would also require NICs with hardware support for the timestamping that makes PTP work accurately. Software-only PTP clients exist, but they can suffer inaccuracies induced by the vagaries of running under a general-purpose operating system and may therefore produce inaccurate results when testing in a LAN environment.

Because of these two factors, support for PTP is not currently on perfSONAR's development roadmap. That said, the team welcomes user feedback and uses it to gauge demand for new features and how much priority they should be given. If a large enough contingent of users is deploying PTP in their networks and believes the additional accuracy would be useful, it will be considered for a future release.
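To tie the server-selection guidance above to an actual configuration, a minimal /etc/ntp.conf might look something like the sketch below. This is an illustration only: the server hostnames are placeholders for low-stratum servers that are close to your site and reached over divergent paths, and the drift file location and restriction lines vary by distribution, so do not treat this as a drop-in file.

    # Four well-known, low-stratum time servers (placeholders -- not pool servers),
    # ideally no more than 20ms away and reached over divergent network paths
    server time1.example-regional.net iburst
    server time2.example-exchange.org iburst
    server time3.example-university.edu iburst
    server time4.example-provider.net iburst

    # Let ntpd remember the local clock's frequency error across restarts
    driftfile /var/lib/ntp/drift

    # Basic protection: do not allow remote hosts to modify or query the daemon
    restrict default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1

After a configuration change, restart ntpd and allow it the better part of a day to stabilize before trusting fine-grained OWAMP results, as described above.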
http://docs.perfsonar.net/ntp_overview.html
2018-10-15T12:23:39
CC-MAIN-2018-43
1539583509196.33
[]
docs.perfsonar.net
Known Issues

This page summarizes current known issues and imperfections to be on the lookout for. Many of these may be improved in future versions of this software. As of version 1.1:

- The locations of spectra on the IFS shift due to flexure inside the spectrograph as its orientation with respect to gravity changes. This includes both a fairly repeatable elevation-dependent component (generally less than 1 pixel in size) and occasional irregular "jumps" of up to perhaps 2 pixels. The data pipeline does not yet robustly determine these shifts automatically, and some manual adjustment of spectral positions is often necessary. See this page for more details.
- Flat fielding is not yet handled well (or at all, depending on which recipe is selected). This is because of the difficulty in separating contributions from the lenslet array and detector flat fields, since there is no way to illuminate the detector itself with flat illumination. Improved algorithms to separate these components are under development. In the meantime, tests indicate that the effect of neglecting flat field corrections entirely is generally a < 5% (1 sigma) photometric uncertainty. More details can be found here and here.
- Instrumental polarization from the telescope and GPI's own optics is not yet characterized in detail, but appears to be small (no more than a few percent).
- Spectral and polarimetric datacube assembly use fairly simple (but very robust) apertures: a 3x1 pixel box in spectral mode and a 5x5 box in polarization mode. We expect to produce datacubes with reduced systematics via "second generation" algorithms making use of high-resolution microlens PSFs, which are currently in development.
- Wavelength calibration in each filter is treated independently, with separate linear wavelength solutions fit to each lenslet in each band. This provides calibration accuracy better than 1% after flexure has been compensated, but we can probably do even better eventually via a polynomial wavelength solution fit simultaneously across the entire Y-K2 wavelength range.
http://docs.planetimager.org/pipeline/installation/known_issues.html
2018-10-15T13:59:48
CC-MAIN-2018-43
1539583509196.33
[]
docs.planetimager.org
Workforce metrics Power BI content

This topic describes the Workforce metrics Microsoft Power BI content. It explains how to access the Power BI reports, and provides information about the data model and entities that were used to build the content.

Accessing the Power BI content

The Workforce metrics Power BI content appears in the Personnel management workspace if you use one of these products:

- Microsoft Dynamics 365 for Finance and Operations
- Microsoft Dynamics 365 for Talent

Metrics that are included in the Power BI content

The following table lists the metrics that are shown on each report. You can filter the charts and tiles on these reports, and pin the charts and tiles to the dashboard. For more information about how to filter and pin in Power BI, see Create and configure a dashboard.

Be sure to download the Workforce metrics Power BI content that applies to the version of Microsoft Dynamics 365 that you're using.

Note: The .pbix files available in Lifecycle Services apply to Finance and Operations only.

Understanding the data model and entities

The following table shows the entities that the content was based on.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/analytics/workforce-analysis-power-bi-content-pack
2018-10-15T12:44:12
CC-MAIN-2018-43
1539583509196.33
[]
docs.microsoft.com
Walkthrough

In this walkthrough we go over the cifar10_cnn.py example in the examples directory. This example explains the basics of using brainstorm and helps you get started with your own project. Prior to running this example, you will need to prepare the CIFAR-10 dataset for brainstorm. This can be done by running the create_cifar10.py script in the data directory. A detailed description of how to prepare your data for brainstorm can be found in Data Format.

We start the cifar10_cnn.py example by importing the essential features we will need later:

    from __future__ import division, print_function, unicode_literals
    import os
    import h5py
    import brainstorm as bs
    from brainstorm.data_iterators import Minibatches
    from brainstorm.handlers import PyCudaHandler
    from brainstorm.initializers import Gaussian

Next we set the seed for the global random number generator in brainstorm. By doing so we make sure that our experiment is reproducible.

    bs.global_rnd.set_seed(42)

Let's now load the CIFAR-10 dataset from the HDF5 file, which we prepared earlier. Next we create a Minibatches iterator for the training set and validation set. Here we specify that we want to use a batch size of 100, and that the image data and targets should be named 'default' and 'targets' respectively.

    data_dir = os.environ.get('BRAINSTORM_DATA_DIR', '../data')
    data_file = os.path.join(data_dir, 'CIFAR-10.hdf5')
    ds = h5py.File(data_file, 'r')['normalized_split']
    getter_tr = Minibatches(100, default=ds['training']['default'][:],
                            targets=ds['training']['targets'][:])
    getter_va = Minibatches(100, default=ds['validation']['default'][:],
                            targets=ds['validation']['targets'][:])

In the next step we use a simple helper tool to create two important layers. The first layer is an Input layer which takes external inputs named 'default' and 'targets' (these names are the default names used by this tool and can be altered by specifying different names). Every layer in brainstorm has a name, and by default this layer will simply be named 'Input'. The second layer is a fully-connected layer which produces 10 outputs, and is assigned the name 'Output_projection' by default. In the background, a SoftmaxCE layer (named 'Output' by default) is added, which will apply the softmax function and compute the appropriate cross-entropy loss using the targets. At the same time this loss is wired to a Loss layer, which marks that this is a value to be minimized.

    inp, fc = bs.tools.get_in_out_layers('classification', (32, 32, 3), 10)

In brainstorm we can wire up our network by using the >> operator. The layer syntax below should be self-explanatory. Any layer connected to other layers can now be passed to from_layer to create a new network. Note that each layer is assigned a name, which will be used later.

    (inp >>
     bs.layers.Convolution2D(32, kernel_size=(5, 5), padding=2, name='Conv1') >>
     bs.layers.Pooling2D(type="max", kernel_size=(3, 3), stride=(2, 2)) >>
     bs.layers.Convolution2D(32, kernel_size=(5, 5), padding=2, name='Conv2') >>
     bs.layers.Pooling2D(type="max", kernel_size=(3, 3), stride=(2, 2)) >>
     bs.layers.Convolution2D(64, kernel_size=(5, 5), padding=2, name='Conv3') >>
     bs.layers.Pooling2D(type="max", kernel_size=(3, 3), stride=(2, 2)) >>
     bs.layers.FullyConnected(64, name='FC') >>
     fc)

    network = bs.Network.from_layer(fc)

We would like to use CUDA to speed up our network training, so we simply set the network's handler to be the PyCudaHandler.
This line is not needed if we do not have, or do not want to use, the GPU; the default handler is the NumpyHandler.

    network.set_handler(PyCudaHandler())

In the next line we initialize the weights of our network with a simple dictionary, using the names that were assigned to the layers before. Note that we can use wildcards here! We specify that:

- For each layer name beginning with 'Conv', the 'W' parameter should be initialized using a Gaussian distribution with std. dev. 0.01, and the 'bias' parameter should be set to zero.
- The parameter 'W' of the layers named 'FC' and 'Output_projection' should be initialized using a Gaussian distribution with std. dev. 0.1. The 'bias' parameter of these layers should be set to zero.

Note that 'Output_projection' is the default name of the final layer created by the helper over which the softmax is computed.

    network.initialize({'Conv*': {'W': Gaussian(0.01), 'bias': 0},
                        'FC': {'W': Gaussian(0.1), 'bias': 0},
                        'Output_projection': {'W': Gaussian(0.1), 'bias': 0}})

Next we create the trainer, for which we specify that we would like to use stochastic gradient descent (SGD) with momentum. Additionally we add a hook to the trainer, which will produce a progress bar during each epoch, to keep track of training.

    trainer = bs.Trainer(bs.training.MomentumStepper(learning_rate=0.01,
                                                     momentum=0.9))
    trainer.add_hook(bs.hooks.ProgressBar())

We would like to check the accuracy of the network on our validation set after each epoch. In order to do so we will make use of a hook. The SoftmaxCE layer named 'Output' produces an output named 'probabilities' (the other output it produces is named 'loss'). We tell the Accuracy scorer that this output should be used for computing the accuracy, using the dotted notation <layer_name>.<view_type>.<view_name>. Next we set the scorers in the trainer and create a MonitorScores hook. Here we specify that the trainer will provide access to a data iterator named 'valid_getter', as well as the scorers which will make use of this data.

    scorers = [bs.scorers.Accuracy(out_name='Output.outputs.probabilities')]
    trainer.train_scorers = scorers
    trainer.add_hook(bs.hooks.MonitorScores('valid_getter', scorers,
                                            name='validation'))

Additionally we would like to save the network every time the validation accuracy improves, so we add a hook for this too. We tell the hook that another hook named 'validation' is logging something called 'Accuracy' and that the network should be saved whenever this value is at its maximum.

    trainer.add_hook(bs.hooks.SaveBestNetwork('validation.Accuracy',
                                              filename='cifar10_cnn_best.hdf5',
                                              name='best weights',
                                              criterion='max'))

Finally, we add a hook to stop training after 20 epochs.

    trainer.add_hook(bs.hooks.StopAfterEpoch(20))

We are now ready to train! We provide the trainer with the network to train, the training data iterator, and the validation data iterator (to be used by the hook for monitoring the validation accuracy).

    trainer.train(network, getter_tr, valid_getter=getter_va)

All quantities logged by the hooks are collected by the trainer, which we can examine after training.

    print("Best validation accuracy:",
          max(trainer.logs["validation"]["Accuracy"]))
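As a small follow-on (not part of the original example), the logs collected by the trainer can also be plotted after training. The sketch below assumes only that trainer.logs["validation"]["Accuracy"] holds one accuracy value per monitored epoch, as used in the line above, and that matplotlib is installed; the filename is arbitrary.

    import matplotlib.pyplot as plt

    # Accuracy values recorded by the 'validation' MonitorScores hook
    val_acc = trainer.logs["validation"]["Accuracy"]

    # Plot accuracy against the monitoring index (one point per epoch)
    plt.plot(range(len(val_acc)), val_acc, marker='o')
    plt.xlabel("Epoch")
    plt.ylabel("Validation accuracy")
    plt.title("CIFAR-10 CNN validation accuracy")
    plt.savefig("cifar10_cnn_validation_accuracy.png")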
https://brainstorm.readthedocs.io/en/latest/walkthrough.html
2018-10-15T13:21:52
CC-MAIN-2018-43
1539583509196.33
[]
brainstorm.readthedocs.io
Start an AWS CodeCommit Pipeline Automatically Using a CloudWatch Events Rule

You can use Amazon CloudWatch Events to trigger pipelines to start automatically when rule or schedule criteria are met. For pipelines with an Amazon S3 or AWS CodeCommit source, an Amazon CloudWatch Events rule detects source changes and then starts your pipeline. When you use the console to create or change a pipeline, the rule and all associated resources are created for you. If you create or change an Amazon S3 or AWS CodeCommit pipeline in the CLI or AWS CloudFormation, you must use these steps to create the Amazon CloudWatch Events rule and all associated resources manually.

In Amazon CloudWatch Events, you create a rule to detect and react to changes in the state of the pipeline's defined source.

To create the rule:

1. Create an Amazon CloudWatch Events rule that uses the pipeline's source repository as the event source.
2. Add AWS CodePipeline as the target.
3. Grant Amazon CloudWatch Events permissions to start the pipeline.

As you build your rule, the Event Pattern Preview pane in the console (or the --event-pattern output in the AWS CLI) displays the event fields in JSON format. The following sample AWS CodeCommit event pattern uses this JSON structure:

    {
        "source": [ "aws.codecommit" ],
        "detail-type": [ "CodeCommit Repository State Change" ],
        "resources": [ "CodeCommitRepo_ARN" ],
        "detail": {
            "event": [ "referenceCreated", "referenceUpdated" ],
            "referenceType": [ "branch" ],
            "referenceName": [ "branch_name" ]
        }
    }

The following is a sample AWS CodeCommit event pattern in the Event window for a "MyTestRepo" repository with a branch named "master":

    {
        "source": [ "aws.codecommit" ],
        "detail-type": [ "CodeCommit Repository State Change" ],
        "resources": [ "arn:aws:codecommit:us-west-2:80398EXAMPLE:MyTestRepo" ],
        "detail": {
            "referenceType": [ "branch" ],
            "referenceName": [ "master" ]
        }
    }

The event pattern uses these fields:

- source: should contain aws.codecommit as the event source.
- detail-type: displays the available event type (CodeCommit Repository State Change).
- resources: contains the repository ARN.
- detail: contains the repository branch information, referenceType and referenceName.

Topics

- Create a CloudWatch Events Rule That Starts Your AWS CodeCommit Pipeline (Console)
- Create a CloudWatch Events Rule That Starts Your AWS CodeCommit Pipeline (CLI)
- Create a CloudWatch Events Rule That Starts Your AWS CodeCommit Pipeline (CFN TEMPLATE)
- Configure Your AWS CodeCommit Pipelines to Use Amazon CloudWatch Events for Change Detection
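If you are following the CLI route listed in the topics above, the rule and target can be created with commands along these lines. This is only a rough sketch: the rule name, pipeline name, and IAM role are placeholders, the event pattern is assumed to be saved locally as event-pattern.json, and the role must allow CloudWatch Events to call codepipeline:StartPipelineExecution on your pipeline.

    # Create the rule from the event pattern shown above, saved as event-pattern.json
    aws events put-rule \
        --name MyCodeCommitRule \
        --event-pattern file://event-pattern.json

    # Point the rule at the pipeline (pipeline name and role are placeholders)
    aws events put-targets \
        --rule MyCodeCommitRule \
        --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline,RoleArn=arn:aws:iam::80398EXAMPLE:role/Role-for-MyRule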
https://docs.aws.amazon.com/codepipeline/latest/userguide/triggering.html
2018-10-15T13:53:36
CC-MAIN-2018-43
1539583509196.33
[]
docs.aws.amazon.com