Dreaming of the perfect backdrop for your Victorian scenes? The Pavilion of Montchanin is a Victorian styled free-standing prop. The set is packed with authentic details, high resolution textures, and grouping to allow the elements (such as the base) to be hidden. Steps, vases, arches and beautiful stonework make this a perfect backdrop.
ATTENTION!!
This is meant to be a guided, but very open discussion. Please feel free to jump in at any time with questions or thoughts on things!
This page presents a broad-level overview of amplicon sequencing and metagenomics as applied to microbial ecology. Both of these methods are most often applied for exploration and hypothesis generation and should be thought of as steps in the process of science rather than end-points – like all tools of science 🙂
Amplicon sequencing
Amplicon sequencing of marker-genes (e.g. 16S, 18S, ITS) involves using specific primers that target a specific gene or gene fragment. It is one of the first tools in the microbial ecologist’s toolkit. It is most often used as a broad-level survey of community composition used to generate hypotheses based on differences between recovered gene-copy numbers between samples.
Metagenomics
Shotgun metagenomic sequencing aims to amplify all the accessible DNA of a mixed community. It uses random primers and therefore suffers much less from PCR bias (discussed below). Metagenomics enables profiling of taxonomy and functional potential. Recently, the recovery of representative genomes from metagenomes has become a very powerful approach in microbial ecology, drastically expanding the known Tree of Life by granting us genomic access to as-yet unculturable microbial populations (e.g. Hug et al. 2016; Parks et al. 2017).
Here we’ll discuss some of the things each is useful and not useful for, and then look at some general workflows for each.
As noted above, amplicon data can still be very useful. Most often when people claim it isn’t, they are assessing that based on things it’s not supposed to do anyway, e.g.:
“Why are you doing 16S sequencing? That doesn’t tell you anything about function.”
“Why are you measuring nitrogen-fixation rates? That doesn’t tell you anything about the proteins that are doing it.”
We shouldn’t assess the utility of a tool based on something it’s not supposed to do anyway 🙂
QUICK QUESTION!
With all that said, do you think we should expect relative abundance information from amplicon sequencing to match up with relative abundance from metagenomic sequencing?
Solution
No, and that’s not a problem if we understand that neither are meant to tell us a true abundance anyway. They are providing different information is all. And the relative abundance metrics they do provide can still be informative when comparing multiple samples generated the same way 🙂
All sequencing technologies make mistakes, and, to a much lesser extent, polymerases make mistakes as well during the amplification process. These mistakes artificially inflate the number of unique sequences recovered from a sample, often substantially. Clustering similar sequences together (generating operational taxonomic units, or OTUs) emerged as one way to mitigate these errors and to summarize data – though at the cost of resolution. The field as a whole is moving towards using solely amplicon sequence variants (ASVs), and there is pretty good reasoning for this. This Callahan et al. 2017 paper nicely lays out the case for that, summarized in the following points:
If you happen to work with amplicon data, I highly recommend digging into the Callahan et al. 2017 paper sometime 🙂 | https://angus.readthedocs.io/en/stable/amplicon_and_metagen.html | 2020-09-18T10:22:10 | CC-MAIN-2020-40 | 1600400187390.18 | [] | angus.readthedocs.io |
Overview
There is a basic entity for each type of movement (land, sea, and air). Each entity type can be tweaked so that it is possible to simulate any kind of drivable machine.
The vehicle entity tool can be found in the Rollup Bar > Entity > Vehicles.
Having different types of vehicles makes levels more fun to play as they provide new ways of navigating terrain and allow the use of vehicle weapons game play.
Enable Continuous Efficiency for Kubernetes
Harness CE monitors cloud costs using your Kubernetes clusters, namespaces, nodes, workloads, and labels. This topic describes how to enable Continuous Efficiency (CE) for Kubernetes.
CE is integrated with a Kubernetes cluster by using the Harness Kubernetes Delegate installed in the cluster, and the Harness Kubernetes Cloud Provider that uses the Delegate for authentication.
Each Kubernetes cluster you want to monitor must have a Harness Delegate and Cloud Provider associated with it. CE cannot monitor multiple clusters using a single Kubernetes Delegate and Kubernetes Cluster Cloud Provider.
To enable the Harness Kubernetes Delegate to monitor your cluster costs, the Service account you used to install and run the Harness Kubernetes Delegate will be granted a special ClusterRole to access resource metrics. This process is described in this topic.
In this topic:
Before You Begin
Prerequisites
- Harness Kubernetes Delegate and Kubernetes Cluster Cloud Provider — This topic assumes that you have a Harness Kubernetes Delegate installed in your Kubernetes cluster, and a Harness Kubernetes Cluster Cloud Provider set up to use that Kubernetes Delegate for authentication. For information on setting up a Kubernetes Delegate, see Kubernetes Quickstart and Connect to Your Target Kubernetes Platform.
- Before enabling CE for Kubernetes, you must ensure the utilization data for pods and nodes is available. To do so, perform the following steps:
Step 1: Install Kubernetes Metrics Server
Metrics Server must be running on the Kubernetes cluster where your Harness Kubernetes Delegate is installed.
- Metrics Server is a cluster-wide aggregator of resource usage data. It collects resource metrics from kubelets and exposes them in Kubernetes API server through Metrics API. For more information, see Installing the Kubernetes Metrics Server from AWS.
To install metrics server on your EKS clusters, run the following command:
kubectl apply -f
Step 2: Access Metrics API
Bind the cluster-admin ClusterRole to a user account. Next, you will use this user account to create a ClusterRole and bind it to the Service account used by the Delegate.
- Bind a user account to the user in cluster-admin ClusterRole. You will use this user account to create a ClusterRole and bind it to the Harness Kubernetes Delegate Service account later.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user <[email protected]>
- Obtain the Service account name and namespace used by the Harness Kubernetes Delegate. By default, when you installed the Kubernetes Delegate, the following were used:
name: default
namespace: harness-delegate
If you have changed these, obtain the new name and namespace.
- Download the ce-default-k8s-cluster-role.yaml file from Harness.
The Subjects section of the ClusterRoleBinding is configured with the default Delegate Service account name (default) and namespace (harness-delegate).
If you have changed these defaults, update the ce-default-k8s-cluster-role.yaml file before running it. (A sketch of the kind of access this role grants is shown after this procedure.)
- Once you have downloaded the file, connect to your Kubernetes cluster and run the following command in your Kubernetes cluster:
kubectl apply -f ce-default-k8s-cluster-role.yaml
- Verify that you have all the required permissions for the Service account using the following commands:
kubectl auth can-i watch pods --as=system:serviceaccount:<your-namespace>:<your-service-account> --all-namespaces
kubectl auth can-i watch nodes --as=system:serviceaccount:<your-namespace>:<your-service-account> --all-namespaces
kubectl auth can-i get nodemetrics --as=system:serviceaccount:<your-namespace>:<your-service-account> --all-namespaces
kubectl auth can-i get podmetrics --as=system:serviceaccount:<your-namespace>:<your-service-account> --all-namespaces
Here is an example showing the commands and output using the default Delegate Service account name and namespace:
$ kubectl auth can-i watch pods --as=system:serviceaccount:harness-delegate:default --all-namespaces
yes
$ kubectl auth can-i watch nodes --as=system:serviceaccount:harness-delegate:default --all-namespaces
yes
$ kubectl auth can-i watch nodemetrics --as=system:serviceaccount:harness-delegate:default --all-namespaces
yes
$ kubectl auth can-i watch podmetrics --as=system:serviceaccount:harness-delegate:default --all-namespaces
yes
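For reference, the following is a minimal sketch of the kind of ClusterRole and ClusterRoleBinding that would grant the permissions verified above. The resource names used here (ce-clusterrole-sketch, ce-clusterrole-binding-sketch) are illustrative assumptions, and the actual ce-default-k8s-cluster-role.yaml provided by Harness may differ; always prefer the downloaded file. It assumes the default Service account name (default) and namespace (harness-delegate).

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ce-clusterrole-sketch
rules:
  # Read-only access to workload and node objects
  - apiGroups: [""]
    resources: ["pods", "nodes", "namespaces", "events"]
    verbs: ["get", "list", "watch"]
  # Resource utilization metrics exposed by the Metrics Server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ce-clusterrole-binding-sketch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ce-clusterrole-sketch
subjects:
  - kind: ServiceAccount
    name: default               # Delegate Service account name
    namespace: harness-delegate # Delegate namespace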
Step: Enable Continuous Efficiency
To enable CE in your cloud environment, you simply need to enable it on the Harness Kubernetes Cloud Provider that connects to your target cluster.
- In Continuous Efficiency, click Setup.
- In Cloud Cost Setup, select the Kubernetes Cloud Provider for which you want to enable Continuous Efficiency.
- In Display Name, enter the name that will appear in CE Explorer to identify this cluster. Typically, this is the cluster name.
- In Cluster Details, select:
- Inherit from selected Delegate: (Recommended) Select this option if the Kubernetes cluster is the same cluster where the Harness delegate was installed.
- Delegate Name: Select the Delegate. For information on adding Selectors to Delegates, see Delegate Installation.
- Enter manually: In this option, the Cloud Provider uses the credentials that you enter manually. The Delegate uses these credentials to send deployment tasks to the cluster. The Delegate can be outside or within the target cluster.
- Master Url: The Kubernetes master node URL. The easiest method to obtain the master URL is using kubectl:
kubectl cluster-info
- Click Next.
- Select the checkbox Enable Continuous Efficiency and click Submit.
The Kubernetes Cloud Provider is now listed under Efficiency Enabled.
As noted earlier, after enabling CE, it takes about 24 hours for the data to be available for viewing and analysis.
Once CE has data, the cluster is listed in Cost Explorer. The cluster is identified by the Display Name you used in the Kubernetes Cloud Provider.
Troubleshooting
- If the Cloud Provider listed in Setup is listed with the following error message, you need to review the steps earlier in this topic.
No Delegate has all the requisites to access the cluster <cluster-name>.
- If the Cloud Provider listed in Setup is listed with the following Invalid request error message, you need to download the ce-default-k8s-cluster-role.yaml file from Harness again. The Subjects section of the ClusterRoleBinding is configured with the default Delegate Service account name (default) and namespace (harness-delegate). If you have changed these defaults, update the ce-default-k8s-cluster-role.yaml file before running it. See Step 2: Access Metrics API.
Artifact CLI Reference#
Note: Using Artifacts during the beta period is free. Once the artifact system is generally available, additional charges will apply based on usage.
Every project on Semaphore has access to three levels of the artifact store: project, workflow and job. Based on this level, you can retrieve a specific artifact in the job environment and through the web interface. You can read more about suggested use cases here.
The artifact command line interface (CLI) is a tool that helps you manage deliverables created during the CI/CD process of your project on Semaphore.
Currently it is available in Linux and Docker environments on Semaphore.
The general interface of the artifact utility is:
artifact [COMMAND] [STORE LEVEL] [PATH] [flags]
[COMMAND] - action to be performed for an artifact (push, pull or yank)
[STORE LEVEL] - level on which the specific artifact is available within the artifact store (project, workflow, job)
[PATH] - points to the artifact (e.g. file or directory)
[flags] - optional command line flags (e.g. --force, --destination)
Artifacts Management#
Uploading Artifact#
To upload an artifact from a Semaphore job it is necessary to specify the artifact store level and point to a file or directory with the artifact push command:
artifact push project my-artifact-v3.tar
Available flags:
--destination (-d) - Used to adjust the artifact name within the artifact store. Later on, you can use this name with artifact pull to download the artifact to a Semaphore job. Example: artifact push project my-artifact.tar --destination releases/my-artifact-v3.tar
--expire-in (-e) - Used to set the artifact expiration time (Nd, Nw, Nm, Ny). For example, you'd probably want to delete an uploaded debugging log in a week or so (artifact push job debugging.log --expire-in 1w).
--force (-f) - By default, the artifact push command doesn't upload an artifact if it is already available in the store. You can use this option to overwrite the existing file.
Downloading Artifact#
Similarly, use artifact pull to download an artifact to the Semaphore job environment. It is necessary to specify the artifact store level of the target artifact and point to a file or directory within the store.
artifact pull project my-artifact-v3.tar
Available flags:
--destination (-d) - This flag can be used to specify a path to which the artifact is downloaded in the Semaphore job environment. Example: artifact pull project releases/my-artifact-v3.tar --destination my-artifact.tar
--force (-f) - Use this option to overwrite a file or directory within the Semaphore job environment.
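Putting push and pull together, a typical pattern is to upload a build artifact in one job and download it in a later job of the same workflow. The commands below are a minimal sketch; the file names and the choice of the workflow store level are only examples.

# In the build job: package the application and store it for later jobs
tar -czf my-app.tar.gz build/
artifact push workflow my-app.tar.gz

# In a later job of the same workflow: retrieve and unpack the artifact
artifact pull workflow my-app.tar.gz
tar -xzf my-app.tar.gz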
Deleting Artifact#
To remove an artifact from the specific artifact store it is necessary to specify the store level and point to a file or directory with the artifact yank command.
artifact yank project my-artifact-v3.tar | https://docs.semaphoreci.com/reference/artifact-cli-reference/ | 2020-09-18T11:28:10 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.semaphoreci.com |
i4designer Security
For greater security and organization, learn how to manage your users and customize their permissions in the i4designer application.
Before managing the i4designer users it is important to understand how the roles and permissions work.
The i4designer Control Center provides the infrastructure necessary to control authentication and authorisation for user accounts, split up into the following sections:
Before moving into the details of the Users and Organizations Management menus, let's clarify the i4designer authorisation levels. The first level of authorisation is given by a user's optional affiliation to an Organization. The second level of authorisation is represented by a set of roles, each carrying its own set of predefined permissions.
Hence, we can distinguish the following hierarchy:
Administrator - The Independent or Tenant Administrator user is a super-administrator having no limits, from permissions point of view.
User - The Independent User is a basic user having access only to its own data (own projects, own account).
Administrator - The Administrator user belonging to an Organization, is another super-user granted with unlimited permissions.
Organization Administrator - The Organization Administrator is a powerful user, however its permissions are limited to the Organization that it belongs to.
Organization User - The Organization User is a basic user having access only to its own data (own projects, own account).
Independent users
The users that are not affiliated to any Organization are the so called Independent users. After being authenticated, the authorization is done actively at every level of the application calculating the user's effective rights.
Organization users
The Organization is a logical grouping of users belonging to the same entity. After being authenticated, the authorization is done actively at every level of the application calculating the user's effective rights. | https://docs.webfactory-i4.com/i4designer/en/i4designer-security.html | 2020-09-18T10:16:42 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.webfactory-i4.com |
... (See the Commands | Station type menu option for the Type values and how to set them.)
.), PPP (ppp.), PPPoE (pppoe.) (1)
GPIO type: Bidirectional (2)
UART limitations: Max practical baudrate ~921600
Serial port FIFOs: 16 bytes for TX, 16 bytes for RX
Serial port line configuration: N/A (as CPU GPIO lines are bidirectional)
Serial port interrupts and io.intenabled: Independent
RTS/CTS remapping: Not supported (3)
ADC: N/A
GA1000 lines remapping: N/A (Wi-Fi not supported)
Beep.divider calculation: N/A (buzzer not provided)
Red status (SR) LED
Yellow Ethernet status (EY) LED
(3) CTS is permanently mapped to 0- PL_INT_NUM_0 (0- PL_IO_NUM_0_INT0). RTS is permanently mapped to 2- PL_IO_NUM_2.
Supported Objects, variable types, and functions
•Sock — socket communications (up to 16 UDP, TCP, and HTTP sessions);
•Net — controls the Ethernet port;
•Ser — in charge of the RS232 port;
•Io — handles I/O lines, ports, and interrupts; DS1100 platform.
Platform-specific constants
You can find them here. | http://docs.tibbo.com/taiko/platform_ds1100.htm | 2018-12-10T00:27:43 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.tibbo.com |
Bill Text (PDF: )
LC Amendment Memo
SB684 ROCP for Committee on Health and Human Services On 2/9/2018 (PDF: )
SB684 ROCP for Committee on Senate Organization (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2017 Assembly Bill 766 - A - Enacted into Law | https://docs.legis.wisconsin.gov/2017/proposals/sb684 | 2018-12-10T00:40:46 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.legis.wisconsin.gov |
Create a New Template
With the template feature, you can easily customize column templates for different situations, based on various Demographic, System, and Gradebook columns. For example, you could customize a template that only shows students' names and the Narrative and Comment columns. TeacherPlus comes preloaded with default templates that you can use or customize further as your own.
To create a new template, do the following:
- On the Gradebook Menu, click the button next to Template Options, and then click New.
In the Select Columns dialog box, enter a descriptive name into the Template Name box.
Default templates have brackets around their names—for example, [Name & Average] and [RC View]. To avoid confusion, we recommend that you avoid square brackets when naming your custom templates.
Optional: Select the Hide All Score Columns check box to hide gradebook score columns in this template view.
Hiding all score columns is useful when you want to display only demographic columns.
To add columns to your template, do either of the following:
- To include demographic columns in your template, select a column from the Demographic & System Columns list, and then click the arrow button to move that column to the Selected Demographic & System Columns list.
- To include gradebook columns in your template, select a column from the Gradebook Columns list, and then click the arrow button to move that column to the Selected Gradebook Columns list.
You can hold the Ctrl or Shift key and click to select multiple columns. You can also select one column, hold the Shift key, and then click any column below the first to select these columns and every column in-between. To remove columns from the Selected Gradebook Columns list, click the remove button.
- Optional: in the gradebook.
Next Steps
To make changes to your template in the future, just edit the layout: Edit an Existing Template. If you no longer need the template, you can permanently delete it: Delete a Template.
Concept Information
TeacherPlus HTML5: TeacherPlus Gradebook Templates | https://docs.rediker.com/guides/teacherplus-gradebook/html5/templates/tpg-t-create-a-template.htm | 2018-12-09T23:22:06 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.rediker.com |
knife serve
Use the knife serve subcommand to run a persistent chef-zero server against the local chef-repo.
Meet Relay
From the innovators at Republic Wireless comes Relay—a nationwide cell phone alternative that kids love and parents can trust. With Relay, kids get the simple pleasures of an analog childhood, with all the safety of modern connectivity.
Go far, stay close
Relay works a lot like an old-fashioned walkie-talkie, except that it’s powered by both WiFi and 4G LTE cellular—so you can connect down the street, across the country, and anywhere you’d use a cell phone. And parents with smartphones can connect with their kids’ Relay devices using the Relay app (for iOS, Android, and Microsoft).
With Relay, you can start a two-way conversation by pushing a single button—and stop worrying about screen addiction, questionable content, privacy, or the other concerns that come with putting a smartphone in the hands of young children.
Talk, listen, learn, and move
Kids can use Relay to check in with family and talk with friends. It’s weather-resistant and kid-tough, and unlike kid-tracker watches, Relay never feels like a tether.
Media Inquiries and Requests for Demos
On deadline?
Email our PR team at [email protected] with ON DEADLINE in the Subject
or call 732-718-4214
Press Releases
A Kid Cell Phone Alternative – Republic Wireless Launches Screen Free Relay with Focus on Simplicity, Safety, and Data Privacy
WiFi Calling Leader Republic Wireless Aims to Minimize Smartphone Addiction for Families with New Products
Images
Product images
High-resolution images of Relay
Download
Contextual images
Images of the Relay mobile app screens
Download
Relay app images
Images of the Relay mobile app screens
Download
Whitesheet
In-depth PDF with full product description and tech specs
Download
Logos
Official high-resolution brand and product logos in EPS and PNG formats
Download
B-Roll video
Digital footage for broadcast or internet use. Please email us when using any footage.
Download | https://docs.relaygo.com/other/media-and-press/media-information-relay-by-republic | 2018-12-10T01:04:34 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.relaygo.com |
Use a task list
Task lists allow tasks to be grouped, categorized, and managed.
About this task
You can:
- Create your own personal lists to group and view your own tasks, for example all tasks associated with a specific project.
- View filtered lists to display standard information, such as which tasks are due tomorrow.
- Create personal filtered lists to display your own reports, using standard condition builders.
Procedure
1. Navigate to Personal Tasks > Lists > Personal Lists to view your own task lists. By default, two task lists are provided: Personal and Work.
2. Click New.
3. Enter the list details:
- Name: Enter the list name.
- Default List: Select the check box to make this the default list which new tasks are automatically assigned to. Only one list can be the default list.
4. Click Submit. The list appears alongside the other lists.
Related topics
- Create a personal task
- Viewing Filtered Lists: Filtered lists show reports of tasks that you are associated with, that meet specific standard filter conditions.
- Using Filtered Personal Lists: You can create filtered personal lists to show tasks that meet a customized filter condition.
- Activate Personal Task Management
- Personal Tasks on mobile devices
- Personal Tasks example
The Apache web server is frequently used as a server in front of a servlet container. While there are no real technical reasons to front Jetty with Apache, sometimes this is needed for software load balancing, or to fit with a corporate infrastructure, or simply to stick with a known deployment structure.
There are 3 main alternatives for connecting Apache to Jetty:
Using the HTTP connectors is greatly preferred, as Jetty performs significantly better with HTTP, and the AJP protocol is poorly documented and there are many version irregularities. If AJP is to be used, then the mod_proxy_ajp module is preferred over mod_jk. Previously, the load balancing capabilities of mod_jk meant that it had to be used (tolerated), but with Apache 2.2, mod_proxy_balancer is available and can load balance over HTTP and AJP connectors.
Apache has a mod_proxy module available for almost all versions of Apache. However, prior to Apache 2.2, only reverse proxy features were available and mod_proxy_balancer was not available for load balancing.
Documentation for mod_proxy is available for:
The configuration file layout for Apache varies greatly with version and distribution, but to configure mod_proxy as a reverse proxy, the following configuration is key:
LoadModule proxy_module modules/mod_proxy.so

ProxyRequests Off
<Proxy *>
  Order deny,allow
  Allow from all
</Proxy>

ProxyPass /test

It is normally recommended that a ProxyPassReverse configuration also be used so that Apache can rewrite any URLs in headers etc. However, if you use the ProxyPreserveHost configuration, Jetty can generate the correct URLs and they do not need to be rewritten:

ProxyPreserveHost On

If your application needs to know the real IP address of the request (e.g. for ServletRequest#getRemoteAddr()) you can use the forwarded property on AbstractConnector, which interprets the mod_proxy_http "x-forwarded-" headers instead:
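A minimal sketch of such a connector configuration in jetty.xml might look like the following. It assumes a Jetty 6 installation (org.mortbay classes, matching the other examples on this page) and a SelectChannelConnector listening on port 8080; adjust the class names and port for your own setup:

<Configure class="org.mortbay.jetty.Server">
  <Call name="addConnector">
    <Arg>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <!-- Port the proxied requests arrive on -->
        <Set name="port">8080</Set>
        <!-- Interpret the X-Forwarded-* headers set by mod_proxy -->
        <Set name="forwarded">true</Set>
      </New>
    </Arg>
  </Call>
</Configure>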
You can also set a default value to be used for ServletRequest#getServerName() and ServletRequest#getServerPort() (if headers are not available):

<Set name="hostHeader">example.com:81</Set>

ProxyStatus On
The situation here is:

https            http
---------> Apache -------> Jetty

If you want to offload the SSL onto Apache, and then use plain HTTP requests to your Jetty backend, you need to configure Jetty to use https:// in all redirected requests.
You can do that by extending the Connector class of your choice, e.g. the SelectChannelConnector, and implementing the customize(EndPoint, Request) method to force the scheme of the Request to be https, like so (don't forget to call super.customize(endpoint, request)!):

public void customize(org.mortbay.io.EndPoint endpoint, Request request) throws IOException
{
    request.setScheme("https");
    super.customize(endpoint, request);
}
If you need access on Jetty to some of the SSL information accessible on Apache, then you need some configuration tricks on Apache to insert the SSL info as headers on outgoing requests. Follow the Apache configuration suggestions in this tutorial, which shows you how to use mod_headers to insert the appropriate request headers. Of course you will also need to code your application to look for the corresponding custom request headers bearing the SSL information.
With Apache 2.2, mod_proxy is able to use the extension mod_proxy_balancer.
The configuration of mod_proxy_balancer is similar to pure mod_proxy, except that balancer:// URLs may be used as a protocol instead of http:// when specifying destinations (workers) in ProxyPass elements.

# map to cluster with session affinity (sticky sessions)
ProxyPass /balancer !
ProxyPass / balancer://my_cluster/ stickysession=jsessionid nofailover=On
<Proxy balancer://my_cluster>
  BalancerMember route=jetty1
  BalancerMember route=jetty2
</Proxy>
Proxy balancer:// - defines the nodes (workers) in the cluster. Each member may be a http:// or ajp:// URL, or another balancer:// URL for cascaded load balancing configuration.
If the worker name is not set for the Jetty servers, then session affinity (sticky sessions) will not work. The JSESSIONID cookie must have the format <sessionID>.<worker name>, in which worker name has the same value as the route specified in the BalancerMember above (in this case "jetty1" and "jetty2"). See this article for details. The following can be added to the jetty-web.xml in the WEB-INF directory to set the worker name.

<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <Get name="sessionHandler">
    <Get name="sessionManager">
      <Call name="setIdManager">
        <Arg>
          <New class="org.mortbay.jetty.servlet.HashSessionIdManager">
            <Set name="WorkerName">jetty1</Set>
          </New>
        </Arg>
      </Call>
    </Get>
  </Get>
</Configure>
Apache provides mod_status and Balancer Manager support so that the status of the proxy and balancer can be viewed on a web page. The following configuration enables these UIs at the /balancer and /status URLs:

<Location /balancer>
  SetHandler balancer-manager
  Order Deny,Allow
  Deny from all
  Allow from all
</Location>

ProxyStatus On

<Location /status>
  SetHandler server-status
  Order Deny,Allow
  Deny from all
  Allow from all
</Location>
These UIs should be protected from external access. | http://docs.codehaus.org/exportword?pageId=76447782 | 2014-10-20T09:55:34 | CC-MAIN-2014-42 | 1413507442420.22 | [] | docs.codehaus.org |
Note: These interpretations are for ASME Committee use only. They are not to be duplicated or used for other than ASME Committee business.
Interpretation: VIII-1-83-142
(Same as VIII-1-83-140)
Subject: Section VIII, Division 1; UG-28(f) and UG-1 1 6
Date Issued: June 23, 1983
File: BC83-168
Question: UG-28(f) of Section VIII, Division 1 indicates that vessels stamped for external pressure of 15 psi or less shall be designed for the smaller of 15 psi or 25% more than the maximum possible external pressure. If a vessel is to be stamped for 10 psi external pressure, is the required design pressure 1.25 X 10 or 12.5 psi?
Reply: Yes.
Interpretation:
VIII-1-83-143 (Same as VIII-1-83-137)
Subject: Section VIII, Division 1; Welding After Hydrostatic Testing
Date Issued: June 27, 1983
File: BC82-222
Question (1): Is it permissible to have cosmetic welding performed after the final hydrostatic test to comply with UW-35 in Section VIII, Division 1 if no subsequent hydrostatic test is made?
Reply (1): No.
Question (2): Are there any other means of nondestructive testing, such as MT, PT, or UT, the use of which after cosmetic welding would preclude the necessity of a second hydrostatic test?
Reply (2): No.
Interpretation: VIII-1-83-144
Subject: Section VIII, Division 1; UW-13(e) and UG-93(d)(3)
Date issued: June 27, 1983
File: BC83-198
Question (1): Does the term "rolled plate" in UW-13(e) of Section VIII, Division 1 refer to flat plate as rolled at the mill?
Reply (1): Yes.
Question (2): Is it required as described in UG-93(d)(3) of Section VIII, Division 1 to examine by magnetic particle or liquid penetrant a joint preparation similar to the one shown in Fig. UW-1.32 sketch (c) except that both plates are of the same dimension (3/4 in.) and the b dimension is the thickness of the plate?
Reply (2): The joint described in Question (2) is not acceptable. Flat closure plates on cylindrical vessels would require compliance to Fig. UW-13.2 for tp
dimension in sketches (b), (c), and (d), and the requirements of UG-93(d)(3) shall apply.
Interpretation: VIII-1-83-145
Subject: Section VIII, Division 1; Markings, UG-116(c)
Date Issued: June 27, 1983
File: BC83-200
Question: Is it required under UG-116(c) of Section VIII, Division 1 that the letters "S-W" be applied under the Code Symbol for a vessel constructed with seamless hemispherical heads arc welded to a seamless shell?
Reply: Yes. See the concluding paragraph of UG-116(c).
Interpretation: VIII-1-83-146
Subject: Calibration of Welding Equipment, Sections 1, IV, and VIII, Divisions 1 and 2
Date Issued: June 30, 1983
File: BC80-38C
Question: Is it a requirement of Section I, IV, or VIII, Divisions 1 and 2, that manual, semiautomatic, and automatic welding equipment be calibrated, and if so, that the devices used for calibration be calibrated?
Reply: No.
Note: This Interpretation also appears as 1-83-45, IV-83-22, and VIII-2-83-17.
Interpretation: VIII-1-83-147
Subject: Tack Welding, Sections 1, IV, and VIII-I
Date Issued: June 30, 1983
File: BC80-173
Question: A manufacturer, holder of a valid U, S, H, or M Certificate of Authorization, sub-contracts an outside organization to roll a shell, which is to be part of a stamped boiler or vessel, and to perform tack welding to secure the longitudinal seam alignment. The outside organization is not part of the Certificate holder's organization. Under what conditions may this work be performed?
Reply: In accordance with PW-31 of Section I, HW-810 of Section IV, and UW-31 of Section VIII, Division 1, tack welds, whether removed or left in place, shall be made using a fillet weld or butt weld procedure qualified to Section IX. Tack welds to be left in place shall be made by welders qualified to Section IX and shall be examined visually for defects, and if found to be defective, shall be removed. It is not necessary that a subcontractor performing such tack welds for the boiler or vessel manufacturer be a holder of an ASME Certificate of Authorization. The final boiler or vessel manufacturer shall maintain the controls to assure the necessary welding procedure and performance qualifications are met in order to satisfy Code requirements.
Note: This Interpretation also appears as 1-83-46 and IV-83-23.
Interpretation: VIII-82-45 (Withdrawn)
Subject: VIII-1, UCS-66(b)
Date Issued: March 23, 1982
File: BC81-038
Question (1): For a pressure vessel designed for low temperature service (-50°F) and internal pressure only that is horizontally supported on a number of ring girders directly attached to the vessel by welding and that utilizes backing strips, left in place in the welding of the head to shell joints, must the ring girders and backings strips meet the requirements of UCS-66(b) for low temperature service?
Reply (1): Ring girders directly attached to the vessel by welding and backing strips left in place as described in Question (1) are considered to be material used in the construction of the vessel. Therefore they are subject to the provisions of UCS-66(b).
Question (2): In the vessel described in Question (1), may SA-36 material be used for ring girders and the backing strips?
Reply (2): No. See UCS-6 and UCS-66.
Note: On November 12, 1982, this Interpretation was withdrawn for further Committee consideration. This withdrawal notice should have appeared in Interpretations No. 12-Section VIII,-1, covering Interpretations issued from July 1, 1982, through December 31, 1982, and revised Interpretations issued from July 1, 1982, through April 1, 1983.
Interpretation: VIII-1-83-133R
Subject: Section VIII, Division 1; UW-51(b) and Appendix 4
Date Issued: October 27, 1983
File: BC82-718*
Question: If a rounded indication on a weld radiograph is interpreted as incomplete fusion, is it rejectable as addressed in UW-51(b)(1) of Section VIII, Division 1, even if it is acceptable as a rounded indication as defined in 4-2(a) of Appendix 4 and does not exceed the size limitation given in Table 4-1?
Reply: Yes.
Interpretation: VIII-1-83-149
Subject: Section VIII, Division 1; UG-81(b)
Date Issued: August 19, 1983
File: BC83-036
Question: In UG-81(b) of Section VIII, Division 1, what shall be considered the spherical portion of an ellipsoidal head?
Reply: The part of an ellipsoidal head that is to be considered the spherical portion is that part which is located within a circle the center of which coincides with the center of the head and the diameter of which is equal to 80% of the shell diameter.
Interpretation: VIII-1-83-150
Subject: Section VIII, Division 1; Appendix L-7
Date Issued: August 19, 1983
File: BC83-120
Question: In Appendix L-7 of Section VIII, Division 1, when fr1 = 1, will A and A1 be equal to zero?
Reply: No. In this case, A = d tr F and A1 = d(E1 t - F tr).
Interpretation: VIII-1-83-151
Subject: Section VIII, Divisions 1 and 2, ULW-16(b) and AD-110(a)
Date Issued: August 23, 1983
File: BC82-652
In the fabrication of multilayered vessels, the following parameters exist. The inner shell is made of carbon steel, low alloy, or austenitic material. Over the inner shell a dummy layer is used, which is only tack welded. Over the dummy layer the other layers are fully welded. UG-27 and AD-201 of Section VIII, Divisions I and 2, require R to be the inside radius in deriving the shell thickness calculation.
Question (1): Is it permissible in calculating the inside radius R to use the dimension corresponding to the inner shell I.D. only, where the dummy layer is between the inner shell and regular layers?
Reply (1): Yes.
Question (2): Can the dummy layer which is only tack welded be considered as part of the thickness of the shell contributing to the strength and resisting the loading conditions, since it is securely in position between the inner shell and other regular layers?
Reply (2): No.
Question (3): Can the inner shell strength be considered in determining the shell thickness when a dummy layer is present subject to meeting the requirements of AD-110(a) and ULW-16(b)?
Reply (3): Yes.
Note: This Interpretation also appears as VIII-2-83-18.
Interpretation: VIII-1-83-152
Subject: Section VIII, Division 1; Stress Values and Joint Efficiencies
Date Issued: August 23, 1983
File: BC83-167
Question(1): Are there any nondestructive tests that may be used on welded pipe or tubing to increase the allowable stress in Section VIII, Division 1?
Reply (1): No.
Question (2): Are there any nondestructive tests that may be used on welded pipe or tubing to increase the joint factor in Section VIII, Division 1?
Reply (2): No.
Interpretation: VIII-1-83-153
Subject: Section VIII, Divisions 1 and 2, Appendix 1-5 and AD-210
Date Issued: August 23, 1983
File: BC83-169
Question (1): Can Appendix 1-5 of Section VIII, Division 1, be applied to an offset cone such as a kettle-type reboiler?
Reply (1): Yes.
Question (2): Can AD-210 of Section VIII, Division 2, be applied to offset cones?
Reply (2): No.
Note: This Interpretation also appears as VIII-2-83-20.
Interpretation: VIII-1-83-154
Subject: Section VIII, Division 1; Appendix 14-40
Date Issued: August 23, 1983
File: BC83-170
Question: When a channel cover consisting of a flat cover, a bolted flange, and a hublike projection is machined from a solid plate, and is attached by through bolting rather than welding, do the rules of Section VIII, Division 1, Appendix 14-40, apply? The hublike projection is used to provide a gasket seating surface in the same manner as a raised face on a conventional flange,
Reply: No.
Interpretation:
VIII-1-83-155
Subject: Section VIII, Divisions 1 and 2; PWHT Welded Buttered Joints; Dissimilar Metal Attachments
Date Issued: August 23, 1983
File: BC83-197
Question: In Section VIII, Division 1 or 2 construction, is it permissible to weld an austenitic stainless (P-No. 8) steel nozzle into carbon steel (P-No. 1) shell or head after postweld heat treatment, provided the carbon steel weld joint preparation is buttered in accordance with the requirements of Section IX with an austenitic stainless steel before postweld heat treatment?
Reply: Yes.
Note: This Interpretation also appears as VIII-2-83-21.
Interpretation: VIII-1-83-156
Subject: Section VIII, Division 1; UW-51(a)(3), Winter 1982 Addenda
Date Issued: August 23, 1983
File: BC83-201
Question (1): Is it required that the manufacturer requalify and recertify his NDE personnel to a program developed using the 1980 Edition of SNT-TC-1A as a guide, as referenced in UW-51(a)(3) of Section VIII, Division 1, if the personnel are presently qualified and certified to a program developed using the 1975 Edition of SNT-TC-1A as a guide?
Reply (1): No. However, when his present certification expires, he must then be requalified and recertified to a program developed using the latest edition of the SNT-TC-IA document adopted by the Code.
Question (2): May NDE personnel qualified and certified to the manufacturer's program developed using the 1980 Edition of the SNT-TC-1A as a guide perform nondestructive examinations on items being constructed to an addendum which referenced the 1975 Edition of SNT-TC-1A?
Reply (2): Yes.
Interpretation: VIII-1-83-157
Subject: Section VIII, Division 1; Appendix A, A-1(a)
Date Issued: August 23, 1983
File: BC83-207
Question: Do the present rules in Appendix A for tubesheet design apply only to tubesheets which derive some of the support of the pressure load from the tubes?
Reply: Yes. The present rules do not apply to U-tube and floating head construction where no support of the tubesheet comes from the tubes.
Interpretation: VIII-1-83-158
Subject: Section VIII, Division 1; UW-51(a)(3) and 12-2
Date Issued: August 23, 1983
File: BC83-210
Question (1): Are the recommended guidelines contained in SNT-TC-1A 1980 considered minimum requirements to be addressed in the employer's written practice?
Reply (1): No.
Question (2): May the detailed recommendations contained in SNT-TC-1A 1980 be reviewed and modified by the employer to meet particular needs, e.g., limited certification?
Reply (2): Yes, as described in Interpretation V-80-05.
Question (3): To what extent may the detailed recommendations of SNT-TC-1A 1980 be modified by the employer to meet particular needs?
Reply (3): The extent that the program may be modified can only be determined by the Manufacturer subject to the scope of his activities. The program will be reviewed and accepted at the time of the joint review and on an ongoing basis by the Authorized Inspector.
Interpretation: VIII-1-83-159
Subject: Section VIII, Division 1; UW-9(d)
Date Issued: August 23, 1983
File: BC83-262
Question: Would a welded head and/or head skirt with longitudinal welds be required to satisfy the rules of UW-9(d) for shell courses of Section VIII, Division 1?
Reply: Yes.
Interpretation: VIII-1-83-160
Subject: Section VIII, Division 1; UW-2 and UW-16
Date Issued: August 24, 1983
File: BC83-034
Question: Do the provisions in UW-2 of Section VIII, Division 1, take precedence over general
requirements in other paragraphs? That is, does a requirement, such as that given in UW-2(b)(4), which requires all Category D joints to be full penetration welds, take precedence over a general rule, such as that given in UW-16(d), which permits nozzles to be attached by fillet and partial penetration welds?
Reply: Yes. Throughout Section VIII, Division 1, there are special provisions which take precedence over general rules. For example, the requirements of UW-2(b) relate to special service restrictions for vessels operating at temperatures below -20'F. There are many other rules for specific design and construction which take precedence over general rules.
Interpretation: VIII-1-83-161
Subject: Section VIII, Division 1; UG-93(d)(3)
Date Issued: August 24, 1983
File: BC83-268
Question: Does a tubesheet over 1/2 in. in thickness welded to the inside of a shell to form a corner joint as shown in Fig. UG-34 sketch (f) require examination in accordance with the requirements of UG-93(d)(3)?
Reply: No.
Interpretation: VIII-1-83-162
Subject: Section VIII, Appendix 9
Date Issued: August 25, 1983
File: BC83-246
Question: Is it the intent of Fig. 9-5 sketch (a) in Section VIII, Division 1, that the dimensions of Type 2 and 4 jackets be the same as Type 1 jackets except for the indicated weld sizes?
Reply: Yes.
Item Variations
In an eCommerce context, items (products) are often offered in several variations, such as different sizes, fits and/or colors. Constructor's item API allows customers to define all the variations available for each item, so that end users can choose to filter on size, fit and/or color.
A single variation is a fully qualified product. For example, if an item comes in 3 sizes and 4 colors, you can upload 12 variations of that item, 1 for each size/color combination. All information that is shared across variations of a single item should be uploaded at the item level. All variation-specific information must be uploaded at the variation level.
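As an illustration, an upload for a t-shirt sold in two colors and two sizes could carry four fully qualified variations. The JSON below is only a sketch of that shape: the field names (id, name, data, variations) are illustrative assumptions, not the exact schema of Constructor's item API, so consult the API reference for the real payload format.

{
  "name": "Classic T-Shirt",
  "id": "TSHIRT-001",
  "data": { "brand": "Acme" },
  "variations": [
    { "id": "TSHIRT-001-RED-S",  "data": { "color": "red",  "size": "S" } },
    { "id": "TSHIRT-001-RED-M",  "data": { "color": "red",  "size": "M" } },
    { "id": "TSHIRT-001-BLUE-S", "data": { "color": "blue", "size": "S" } },
    { "id": "TSHIRT-001-BLUE-M", "data": { "color": "blue", "size": "M" } }
  ]
}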
#Matching
Variation search and matching works like 'search within search'.
Constructor's search (as well as browse and autocomplete) first identifies all products that match a query (where a query encompasses a search term and zero or more filter selections; or a browse group id and zero or more filter selections).
Next Constructor searches within each item (typically, a product) to identify all matching variations for the query in question.
Finally, Constructor identifies the best matching variation among those that match the query. First the algorithm will look for any variation with is_default set to true, then will search for the best variation match based on NLP, ranking optimizations and variation suggested_score.
#Results
Items are returned one-by-one, with the best matching variation's data presented at the top level for the item in question. Other matching variations are returned in an array on the item in question.
#Endpoints
The item and bulk item endpoints can accept up to 100 variations on each product.
While the item update and patch methods allow updating certain fields on an item while leaving others untouched, the variation array must be updated in full when using the REST endpoints. However, the catalog update endpoints allow updating just certain variations. | https://docs.constructor.io/rest_api/item_variations/ | 2021-06-12T16:32:01 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.constructor.io |
Handling mutually exclusive dependencies
Introduction to component capabilities
By default, Gradle will fail if two components in the dependency graph provide the same capability. Because most modules are currently published without Gradle Module Metadata, capabilities are not always automatically discovered by Gradle. It is however interesting to use rules to declare component capabilities in order to discover conflicts as soon as possible, during the build instead of runtime.
A typical example is a module that has been relocated to different coordinates: older releases of the ASM library were published under asm:asm, while newer releases are published under org.ow2.asm:asm. A component metadata rule can declare that both modules provide the same org.ow2.asm:asm capability, as shown in the sketch below.
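The following Groovy DSL sketch shows one way such a rule could be written. The rule class name and the group/name check are illustrative assumptions rather than the exact sample from the Gradle manual, but the ComponentMetadataRule, allVariants, withCapabilities and addCapability APIs are the standard mechanism for this:

class AsmCapability implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        context.details.with {
            // Old ASM releases (group 'asm') provide the same API as org.ow2.asm:asm
            if (id.group == "asm" && id.name == "asm") {
                allVariants {
                    withCapabilities {
                        addCapability("org.ow2.asm", "asm", id.version)
                    }
                }
            }
        }
    }
}

dependencies {
    components {
        all(AsmCapability)
    }
}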
Now the build is going to fail whenever the two components are found in the same dependency graph.
Selecting between candidates
At some point, a dependency graph is going to include either incompatible modules, or modules which are mutually exclusive. For example, you may have different logger implementations and you need to choose one binding. Capabilities help realizing that you have a conflict, but Gradle also provides tools to express how to solve the conflicts.
Selecting between different capability candidates
In the relocation example above, Gradle was able to tell you that you have two versions of the same API on classpath: an "old" module and a "relocated" one. Now we can solve the conflict by automatically choosing the component which has the highest capability version:
configurations.all { resolutionStrategy.capabilitiesResolution.withCapability('org.ow2.asm:asm') { selectHighestVersion() } }
configurations.all { resolutionStrategy.capabilitiesResolution.withCapability("org.ow2.asm:asm") { selectHighestVersion() } }
However, fixing by choosing the highest capability version conflict resolution is not always suitable. For a logging framework, for example, it doesn’t matter what version of the logging frameworks we use, we should always select Slf4j.
In this case, we can fix it by explicitly selecting slf4j as the winner:
configurations.all { resolutionStrategy.capabilitiesResolution.withCapability("log4j:log4j") { def toBeSelected = candidates.find { it.id instanceof ModuleComponentIdentifier && it.id.module == 'log4j-over-slf4j' } if (toBeSelected != null) { select(toBeSelected) } because 'use slf4j in place of log4j' } }
configurations.all { resolutionStrategy.capabilitiesResolution.withCapability("log4j:log4j") { val toBeSelected = candidates.firstOrNull { it.id.let { id -> id is ModuleComponentIdentifier && id.module == "log4j-over-slf4j" } } if (toBeSelected != null) { select(toBeSelected) } because("use slf4j in place of log4j") } }
Note that this approach also works well if you have multiple Slf4j bindings on the classpath: bindings are basically different logger implementations and you need only one. However, the selected implementation may depend on the configuration being resolved. For example, for tests, slf4j-simple may be enough but for production, slf4j-over-log4j may be better.
For more information, check out the capabilities resolution API.
Timeline View
From the Cortex® XDR™ management console you can view the sequence (or timeline) of events and alerts that are involved in any particular threat.
The Timeline provides a forensic timeline of the sequence of events, alerts, and informational BIOCs involved in an attack. While the Causality View of an alert surfaces related events and processes that Cortex XDR identifies as important or interesting, the Timeline displays all related events, alerts, and informational BIOCs over time.
Cortex XDR presents the Timeline in four parts:
Section
Description
CGO (and process instances that are part of the CGO)
Cortex XDR displays the Causality Group Owner (CGO) and the host on which the CGO ran in the top left of the timeline. The CGO is the parent process in the execution chain that Cortex XDR identified as being responsible for initiating the process tree. In the example above, wscript.exe is the CGO and the host it ran on was HOST488497. You can also click the blue corner of the CGO to view and filter related processes from the Timeline. This will add or remove the process and related events or alerts associated with the process from the Timeline.
Timespan
By default, Cortex XDR displays a 24-hour period from the start of the investigation and displays the start and end time of the CGO at either end of the timescale. You can move the slide bar to the left or right to focus on any time-gap within the timescale. You can also use the time filters above the table to focus on set time periods.
Activity
Depending on the type of activities involved in the CI chain of events, the activity section can present any of the following three lanes across the page:
Alerts—The alert icon indicates when the alert occurred.
BIOCs—The category of the alert is displayed on the left (for example: tampering or lateral movement). Each BIOC event also indicates a color associated with the alert severity. An informational severity can indicate something interesting has happened but there weren’t any triggered alerts. These events are likely benign but are byproducts of the actual issue.
Event information—The event types include process execution, outgoing or incoming connections, failed connections, data upload, and data download. Process execution and connections are indicated by a dot. One dot indicates one connection while many dots indicates multiple connections. Uploads and Downloads are indicated by a bar graph that shows the size of the upload and download.
The lanes depict when activity occurred and provide additional statistics that can help you investigate. For BIOC and Alerts, the lanes also depict activity nodes—highlighted with their severity color: high (red), medium (yellow), low (blue), or informational (gray)—and provide additional information about the activity when you hover over the node.
Related events, alerts, and informational BIOCs
Cortex XDR displays up to 100,000 alerts, BIOCs (triggered and informational), and events. Click on a node in the activity area of the Timeline to filter the results you see here. Similar to other pages in Cortex XDR, you can create filters to search for specific events.
NOTE: RSS is integrated as a default application for all users. Only in the event that you have removed this or you need two separate instances of the application do you need to integrate using the below steps. RSS doesn't require your login details.
Go to your YellowAnt Dashboard (yoursubdomain.yellowant.com) or head over to the YellowAnt Marketplace.
In the search bar, look for "RSS" or simply click the icon. If you have already integrated the application, you will be able to see it under "My Applications".
4. Once you find the application either in the dashboard or on the Marketplace, click View. You will be taken to a page where you'll find the Integrate option/button. Click the Integrate button.
5. Since RSS doesn't take any user login, with that simple step, it is now integrated and you get a message on your chat application for the same. You will be able to see it under your applications in the Dashboard too. | https://docs.yellowant.com/integrating-applications/applications-on-yellowant/rss | 2021-06-12T17:20:55 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.yellowant.com |
The Organization Settings screen allows you to configure things at the organization level - i.e. things that are not App specific. Examples include:
Container Images
External Resources
Environment Types
API Tokens
Organization members
Whether you will be able to access the Organization Settings, or what you will be able to update inside them, will depend on the Role you have been assigned as part of your Organization Membership.
You can access the Organization Settings via the Main Menu in the Top Bar.
The Tabs can be used to switch between different parts of the Organization Settings.
The Close button is used to return to the screen you were previously on. | https://docs.humanitec.com/reference/user-interface/organization-settings | 2021-06-12T16:54:45 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.humanitec.com |
Public Mirrors to DQS Migration¶
Moving your queries from Spamhaus’ Public mirrors to our Data Query Service requires a few changes to your configuration that can be made quickly and do not require a significant amount of engineering time to accomplish.
Complete the Free Data Query Service Account Form
Verify your email address
Receive your DQS key, that is unique to you.
A query against the public mirrors looks like the following (using zen as the example):
2.0.0.127.zen.spamhaus.org
This is, on the other hand, what the equivalent query against DQS looks like:
2.0.0.127.<key>.zen.dq.spamhaus.net
In other words:
Put your DQS key at the very beginning
Add the zone you want to query (such as
zen,
xbl,
dbl, …)
Add
dq.spamhaus.netat the end (note it’s
.net, not
.org)
These configuration changes will have to happen for any Spamhaus DNSBL zone you query in your configuration. The following are all of the zones you can query on the public mirrors and the corresponding data query service zones and how they should be formatted. | https://docs.spamhaus.com/datasets/docs/source/70-access-methods/data-query-service/015-migrating-to-dqs.html | 2021-06-12T17:46:36 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.spamhaus.com |
Signatures
Signature is the evidence to prove the sender owns the transaction. It will be created from the actions outlined below:
Compose a data structure. please note
msgs,
memo,
source,
dataare the same as in the above
payload.
chain_id: a string, unique ID for the Chain, it stays the same for most time, but may vary as Binance Chain evolves;
account_number: a string for a 64-bit integer, an identifier number associated with the signing address
sequence: a string for a a 64-bit integer, please check accounts
memo: a string, a short sentence of remark for the transaction
msgs: a byte array, json encoded transaction messages, please check the encoding doc.
source: a string for a 64 bits integer, which is an identifier for transaction incoming tools
data: byte array, reserved for future use
Here is an example in go-sdk:
golang
// StdSignMsg def
type StdSignMsg struct {
ChainID string `json:"chain_id"`
AccountNumber int64 `json:"account_number"`
Sequence int64 `json:"sequence"`
Msgs []msg.Msg `json:"msgs"`
Memo string `json:"memo"`
Source int64 `json:"source"`
Data []byte `json:"data"`
}
Encode the above data structure in json, with ordered key, Specifically:
- Maps have their keys sorted lexicographically
- Structs keys are marshalled in the order defined in the struct
Sign SHA256 of the encoded byte array, to create an ECDSA signature on curve Secp256k1 and serialize the
Rand
Sresult into a 64-byte array. (both
Rand
Sare encoded into 32-byte big endian integers, and then
Ris put into the first 32 bytes and
Sare put into the last 32 bytes of the byte array. In order to break
S's malleability,
Sset to
curve.Order() - Sif
S > curnve.Order()/2.)
The
signature will be encoded together with transaction message and sent as
payload to Binance Chain node via RPC or http REST API, as described in the above section. | https://docs.binance.org/guides/concepts/signature.html | 2021-06-12T18:03:01 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.binance.org |
Using feeds
Use RSS feeds to get content updates without visiting the community.
By subscribing to a feed and using a feed reader (also known as an aggregator), you can discover what has changed in the application without having to visit it. For more information, see What are feeds?.
Note: RSS subscriptions are not supported if you have SSO enabled in your community because the RSS reader cannot follow HTTP redirects for authentication.
To subscribe to a feed:
- To subscribe to an RSS feed:You can see a list of available feeds for that place.
- To subscribe to a feed, click on the link that corresponds to the content type you want.You can subscribe to specific content types, such as only discussions posted in a group or only the comments of a blog.
- Select the application you want to use as your feed reader and click Subscribe Now.When you subscribe, you may need to enter your user name and password so that Jive knows it's not giving feed information to just anyone.
Attention: In order to subscribe to RSS feeds with Outlook 2007, you need to use the limited workaround if you use Internet Explorer. For more information, see. | https://docs.jivesoftware.com/9.0_on_prem_int/end_user/jive.help.core/user/HowDoIUseFeedsRemote.html | 2021-06-12T17:49:28 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.jivesoftware.com |
When you create a repl, it can either be public or private.
Private repls can only be accessed by people in your team that are members, and by guests who are directly invited to these repls. You can make a repl private when you initially create it.
Public repls can be accessed by anyone who has the repl URL, allowing them to fork and copy everything on the repl. This is a great way to share code with people outside of the team. Only team members can edit the original repl.
You can toggle between "private" and "public" from inside a repl, as many times as you want, by clicking on the repl name in the top left of your screen.
You can fork any repl, private or public, by clicking on the same repl name and then clicking on the three-dot menu, and selecting "Fork". | https://docs.replit.com/pro/replManagement | 2021-06-12T18:27:53 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.replit.com |
Converts an input base64 value to text. Output type is String.Converts an input base64 value to text. Output type is String.
- base64 is a method of representing data in a binary format over text protocols. During encoding, text values are converted to binary values 0-63. Each value is stored as an ASCII character based on a conversion chart.
Typically, base64 is used to transmit binary information, such as images, over transfer methods that use text, such as HTTP.
NOTE: base64 is not an effective method of encryption.
- For more information on base64, see.:
base64decode(mySource)
Output: Decodes the base64 values from the
mySource column into text.
String literal example:
base64decode('GVsbG8sIFdvcmxkLiA=')
Output: Decodes the input value to the following text:
Hello, World. .
Syntax and Arguments
base64decode(column_string)
For more information on syntax standards, see Language Documentation Syntax Notes.
column_string
Name of the column or string constant to be converted.
- Missing string or column values generate missing string results.
- String constants must be quoted (
'Hello, World').
- Multiple columns and wildcards are not supported.
Usage Notes:
Usage Notes:
Tip: For additional examples, see Common Tasks.
Examples
Tip: For additional examples, see Common Tasks.
Example - base64 encoding and decoding. | https://docs.trifacta.com/display/SS/BASE64DECODE+Function | 2021-06-12T18:01:44 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.trifacta.com |
Files Metadata¶
moz.build Files provide a mechanism for attaching metadata to files. Essentially, you define some flags to set on a file or file pattern. Later, some tool or process queries for metadata attached to a file of interest and it does something intelligent with that data.
Defining Metadata¶
Files metadata is defined by using the
Files Sub-Context in
moz.build
files. e.g.:
with Files('**/Makefile.in'): BUG_COMPONENT = ('Firefox Build System', 'General')
This working example says, for all Makefile.in files in every directory underneath this one - including this directory - set the Bugzilla component to Firefox Build System :: General.
For more info, read the docs on Files.
How Metadata is Read¶
Files metadata is extracted in Filesystem Reading Mode.
Reading starts by specifying a set of files whose metadata you are
interested in. For each file, the filesystem is walked to the root
of the source directory. Any
moz.build encountered during this
walking are marked as relevant to the file.
Let’s say you have the following filesystem content:
/moz.build /root_file /dir1/moz.build /dir1/foo /dir1/subdir1/foo /dir2/foo
For
/root_file, the relevant
moz.build files are just
/moz.build.
For
/dir1/foo and
/dir1/subdir1/foo, the relevant files are
/moz.build and
/dir1/moz.build.
For
/dir2, the relevant file is just
/moz.build.
Once the list of relevant
moz.build files is obtained, each
moz.build file is evaluated. Root
moz.build file first,
leaf-most files last. This follows the rules of
Filesystem Reading Mode, with the set of evaluated
moz.build
files being controlled by filesystem content, not
DIRS variables.
The file whose metadata is being resolved maps to a set of
moz.build
files which in turn evaluates to a list of contexts. For file metadata,
we only care about one of these contexts:
Files.
We start with an empty
Files instance to represent the file. As
we encounter a files sub-context, we see if it is appropriate to
this file. If it is, we apply its values. This process is repeated
until all files sub-contexts have been applied or skipped. The final
state of the
Files instance is used to represent the metadata for
this particular file.
It may help to visualize this. Say we have 2
moz.build files:
# /moz.build with Files('*.cpp'): BUG_COMPONENT = ('Core', 'XPCOM') with Files('**/*.js'): BUG_COMPONENT = ('Firefox', 'General') # /foo/moz.build with Files('*.js'): BUG_COMPONENT = ('Another', 'Component')
Querying for metadata for the file
/foo/test.js will reveal 3
relevant
Files sub-contexts. They are evaluated as follows:
/moz.build - Files('*.cpp'). Does
/*.cppmatch
/foo/test.js? No. Ignore this context.
/moz.build - Files('**/*.js'). Does
/**/*.jsmatch
/foo/test.js? Yes. Apply
BUG_COMPONENT = ('Firefox', 'General')to us.
/foo/moz.build - Files('*.js'). Does
/foo/*.jsmatch
/foo/test.js? Yes. Apply
BUG_COMPONENT = ('Another', 'Component').
At the end of execution, we have
BUG_COMPONENT = ('Another', 'Component') as the metadata for
/foo/test.js.
One way to look at file metadata is as a stack of data structures.
Each
Files sub-context relevant to a given file is applied on top
of the previous state, starting from an empty state. The final state
wins.
Finalizing Values¶
The default behavior of
Files sub-context evaluation is to apply new
values on top of old. In most circumstances, this results in desired
behavior. However, there are circumstances where this may not be
desired. There is thus a mechanism to finalize or freeze values.
Finalizing values is useful for scenarios where you want to prevent wildcard matches from overwriting previously-set values. This is useful for one-off files.
Let’s take
Makefile.in files as an example. The build system module
policy dictates that
Makefile.in files are part of the
Build
Config module and should be reviewed by peers of that module. However,
there exist
Makefile.in files in many directories in the source
tree. Without finalization, a
* or
** wildcard matching rule
would match
Makefile.in files and overwrite their metadata.
Finalizing of values is performed by setting the
FINAL variable
on
Files sub-contexts. See the
Files documentation for more.
Here is an example with
Makefile.in files, showing how it is
possible to finalize the
BUG_COMPONENT value.:
# /moz.build with Files('**/Makefile.in'): BUG_COMPONENT = ('Firefox Build System', 'General') FINAL = True # /foo/moz.build with Files('**'): BUG_COMPONENT = ('Another', 'Component')
If we query for metadata of
/foo/Makefile.in, both
Files
sub-contexts match the file pattern. However, since
BUG_COMPONENT is
marked as finalized by
/moz.build, the assignment from
/foo/moz.build is ignored. The final value for
BUG_COMPONENT
is
('Firefox Build System', 'General').
Here is another example:
with Files('*.cpp'): BUG_COMPONENT = ('One-Off', 'For C++') FINAL = True with Files('**'): BUG_COMPONENT = ('Regular', 'Component')
For every files except
foo.cpp, the bug component will be resolved
as
Regular :: Component. However,
foo.cpp has its value of
One-Off :: For C++ preserved because it is finalized.
Important
FINAL only applied to variables defined in a context.
If you want to mark one variable as finalized but want to leave
another mutable, you’ll need to use 2
Files contexts.
Guidelines for Defining Metadata¶
In general, values defined towards the root of the source tree are
generic and become more specific towards the leaves. For example,
the
BUG_COMPONENT for
/browser might be
Firefox :: General
whereas
/browser/components/preferences would list
Firefox :: Preferences. | https://firefox-source-docs.mozilla.org/build/buildsystem/files-metadata.html | 2021-06-12T17:23:14 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
Upgrading versions of transitive dependencies
Direct dependencies vs dependency constraints
A component may have two different kinds of dependencies:
direct dependencies are directly required by the component. A direct dependency is also referred to as a first level dependency. For example, if your project source code requires Guava, Guava should be declared as direct dependency.
transitive dependencies are dependencies that your component needs, but only because another dependency needs them.
It’s quite common that issues with dependency management are about transitive dependencies. Often developers incorrectly fix transitive dependency issues by adding direct dependencies. To avoid this, Gradle provides the concept of dependency constraints.
Adding constraints on transitive dependencies can also define a rich version constraint and support strict versions to enforce a version even if it contradicts with the version defined by a transitive dependency (e.g. if the version needs to be downgraded).
Dependency constraints themselves can also be added transitively. | https://docs.gradle.org/current/userguide/dependency_constraints.html | 2021-06-12T16:54:29 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.gradle.org |
Integer
The
Integer block allows us to enter whole number values into our graphs by linking an
Integer block's output to an integer type input of another block.
Integerblock to enter the integer 10 as an input into the
Get Uniswap Token Priceblock:
When the above graph is run, the price value output by the
Get Uniswap Token Price block will be the price of 10 of the tokens specified by the contract address in the
String block.
Integerblock to specify an integer literal, which is an integer whose value is known by the developer when they develop the graph, and can therefore be entered directly as we have done with the number 10. As with all the other base variable data types, we can also save integer values determined at runtime as persistent variables using
Set variableblocks, and we can retreive those values later using
Get variableblocks. | https://docs.graphlinq.io/blockTypes/1-baseVariable/3-integer/ | 2021-06-12T16:46:32 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['https://i.imgur.com/Lro9Co3.png', None], dtype=object)] | docs.graphlinq.io |
Configuring ADFS to send claims using custom rules
To complete the prerequisites for Jive for SharePoint, an ADFS administrator with IT expertise needs to send claims by using a custom rule.
The following steps must be performed by the ADFS administrator with IT expertise.
To configure a custom rule for sending claims in ADFS:
- Open up the ADFS console.
- Click trust relationships and then right-click as shown in the following image:
- Open the Jive URL in a new tab and add
saml/metadatato the end. For example:
- If the file is not automatically downloaded as XML, download and rename it with a .xml extension.
- In the ADFS Console, click , as shown in the following image:
- Type or browse to the Federation metadata file location, and then click Next.
- Click Specify Display Name and enter the display name.
- Click Next. , then click
- Select Next. , then click
- In the Ready to Add Trust step, click Next.
- In the Finish step, select the Open the Edit Claims Rule dialog for this relying party trust when this wizard closes option.
- When the Edit Claims Rules for Jive SSO Integration dialog box opens, click Add Rule, as shown in the following image:
- In the Choose Rule Type step, select , then click Next.
- In the Configure Claim Rule step, type the Claim rule name, select Active Directory, and then select or type the following information in the table exactly as it appears below for Mapping of LDAP attributes to outgoing claim types:
- Click Finish.
- Once again, use the Edit Claims Rules for Jive SSO Integration dialog box to add a new rule by clicking Add Rule.
- In the Choose Rule Type step, select , then click Next.
- Type in the following text in the Custom Rule text box, at the same time customizing the settings for your environment:
- adfs3: Your ADFS server name.
- iqc01.com: The correct domain.
- ADFSClaimsID: The value you have entered as the Claims ID value in the SAML in the Jive Admin Console.
The transformation rule has four parts:
- Type ==
"…": The source of information defined as schema URL.For e-mail address: User Principle Name (UPN):
- Type =
"ADFSClaimsID": The name of the attribute ADFS sends to Jive on successful login.
ADFSClaimsIDis the name of the user mapping field to set in Jive’s SAML Admin Console. ADFS and Jive must match.
- Value =
"i:05.t|adfs3.mydomain.com|" + c.Value: The Claims ID realized by SharePoint for user identification.
- ValueType =
c.ValueType: The type is not used actively; it is a text field in Jive user profile. You can leave as is.
For more information on Claim Types, see ClaimTypes Members on Microsoft portal at Rules Examples
c:[Type == ""] => issue(Type = "ADFSClaimsID", Value = "i:05.t|adfs3.mydomain.com|" + c.Value, ValueType = c.ValueType);
Result:
i:05.t|adfs3.mydomain.com|[email protected]
- Classic NTLM ClaimsID
c:[Type == ""] => issue(Type = "ADFSClaimsID", Value = " i:0#.w|mydomain\" + c.Value, ValueType = c.ValueType);
Result:
i:0#.w|mydomain\user1
Customize these rules per customer to match the right Claims ID supported by the customer's SharePoint environment. The Claims ID can change from the examples above, except for the classic NTLM Claims ID that is standard when using NTLM authentication.
You can check the User Diagnostic script to verify that Claims ID is supported by SharePoint.
- Click OK, and then click Finish. | https://docs.jivesoftware.com/cloud_int/comm_mgr/jive.help.jiveforsharepointV5/Admin/SendClaimsUsingCustomFile.html | 2021-06-12T17:13:30 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.jivesoftware.com |
Compare a config drift template
Contributors
Download PDF of this page
You can compare the system and cluster configurations and detect the configuration deviations in near real time.
Steps
From the left pane, click Config Drift.
Select one of the existing templates or click Add Template to add a new template.
Generate a config drift report
You can generate a report immediately or you can schedule the report to be generated on a weekly or monthly basis.
An email is sent with the details of the configuration deviation between the selected systems. | https://docs.netapp.com/us-en/active-iq/task_compare_config_drift_template.html | 2021-06-12T18:59:22 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.netapp.com |
Access Methods¶
Traditionally, Spamhaus data are consumed through DNS lookups, using a well-established access method that has been a standard for two decades.
However, while this standard is well-received and implemented throughout the e-mail industry, other types of usage require different media and protocols.
In this section, we’re going to describe the various access methods available, which datasets are available through them and how.
Data Query Service | https://docs.spamhaus.com/datasets/docs/source/70-access-methods/index.html | 2021-06-12T16:36:11 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.spamhaus.com |
The Billboard Renderer renders Billboard Assets. Billboards are a level-of-detail (LOD) method for drawing complicated 3D Meshes in a simpler way when they are far away from the CameraA component which creates an image of a particular viewpoint in your scene. The output is either drawn to the screen or captured as a texture. More info
See in Glossary. When is far away from the Camera, its size on screen means there is no need to draw it in full detail. Instead, you can replace the complex 3D Mesh with a 2D billboard representation.
Certain features, such as SpeedTree, export Billboard Assets, but you can also create them yourself. For information on how to create a Billboard Asset, see the BillboardAssets Manual page and the BillboardAsset Script reference page.
Properties on this component are split into the following sections:
This section contains general properties in the root of the component.
The Lighting section contains properties that specify how this Billboard Renderer interacts with lighting in Unity.
The Probes section contains properties relating to Light Probes and Reflection Probes.
This section contains additional renderingThe process of drawing graphics to the screen (or to a render texture). By default, the main camera in Unity renders its view to the screen. More info
See in Glossary properties. | https://docs.unity3d.com/2019.2/Documentation/Manual/class-BillboardRenderer.html | 2021-06-12T19:01:44 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.unity3d.com |
Endaoment is a new Community Foundation & public charity offering Donor-Advised Funds (DAFs) built atop the Ethereum blockchain, allowing you to donate to almost any U.S. nonprofit organization.
Our mission is to encourage and manage the charitable giving of cryptocurrency.
We're a California Nonprofit Public Benefit Corporation headquartered in San Francisco, federally tax-exempt under IRS revenue code section 501(c)(3). All donations to Endaoment or Endaoment DAFs are tax-deductible to the fullest extent of the law.
Endaoment is incubated by Framework Labs, a leader in the decentralized finance industry and sister company of investment firm Framework Ventures. Our on-chain smart contracts have been audited by OpenZeppelin and our application interfaces have been built to leverage services and tools native to the DeFi industry. We'll be rolling out more features down the road that double-down on making a Community Foundation built for the DeFi community.
You can create a DAF on the Ethereum Mainnet at via our app, test out our latest build on the Ropsten test network, or join us on Discord if you have any questions! | https://docs.endaoment.org/ | 2021-06-12T16:32:16 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.endaoment.org |
minion id¶
Each minion needs a unique identifier. By default when a minion starts for the
first time it chooses its FQDN as that
identifier. The minion id can be overridden via the minion's
id
configuration setting.
Tip
minion id and minion keys
The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host.
The default matching that Salt utilizes is
shell-style globbing around the minion id. This also works for states
in the top file.
Note
You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked.
Match all minions:
salt '*' test.version
Match all minions in the example.net domain or any of the example domains:
salt '*.example.net' test.version salt '*.example.*' test.version
Match all the
webN minions in the example.net domain (
web1.example.net,
web2.example.net …
webN.example.net):
salt 'web?.example.net' test.version
Match the
web1 through
web5 minions:
salt 'web[1-5]' test.version
Match the
web1 and
web3 minions:
salt 'web[1,3]' test.version
Match the
web-x,
web-y, and
web-z minions:
salt 'web-[x-z]' test.version
Note
For additional targeting methods please review the compound matchers documentation.
Minions can be matched using Perl-compatible
regular expressions (which is globbing on steroids and a ton of caffeine).
Match both
web1-prod and
web1-devel minions:
salt -E 'web1-(prod|devel)' test.version
When using regular expressions in a State's top file, you must specify
the matcher as the first option. The following example executes the contents of
webserver.sls on the above-mentioned minions.
base: 'web1-(prod|devel)': - match: pcre - webserver | https://docs.saltproject.io/en/latest/topics/targeting/globbing.html | 2021-06-12T17:14:33 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.saltproject.io |
Which images should we optimize?
Here, you can pick which images to optimize. At the moment, we can optimize your product images and the images in your theme files.
However, we’re not able to optimize the images uploaded through the theme customizer at the moment. Shopify has not provided access to these images to apps.
How aggressively should we compress your images?
Here, you can pick how heavily your images should be compressed.
The more aggressive the compression, the more likely you’ll see a change in quality in the final image. However, even with the ‘aggressive’ setting, you may not notice any differences.
More aggressive compression also means smaller file sizes and thus a faster store.
Should we automatically optimize new images?
We can monitor your store for new images and automatically compress them based on your settings. If this isn’t enabled, new images won’t be automatically compressed. | http://docs.hyperspeed.me/knowledge-base/image-optimization-options/ | 2021-06-12T17:08:57 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.hyperspeed.me |
Error (too many recipients) when you update a meeting in a large Outlook distribution group
Original KB number: 3024804
Symptoms
You send a meeting request to a distribution group that has a large membership. You later try to send an update to the meeting, but you receive a non-delivery report (NDR) that resembles the following:
This message wasn't delivered to anyone because there are too many recipients. The limit is 500. This message has <number of> recipients.
For each listed recipient, you see the following error message:
This message has too many recipients. Please try to resend with fewer recipients.
Under the diagnostic information for administrators, the following error is generated for each listed recipient:
#550 5.5.3 RESOLVER.ADR.RecipLimit; too many recipients ##
When you open the meeting on the calendar, the To field contains a number of the recipients in the distribution group membership, in addition to the distribution group itself.
Cause
This issue may occur if the number of recipients who responded to the earlier meeting request exceeds the recipient limit for messages. Each recipient who sends a response to the meeting request is added to the To field in the updates to the meeting request, in addition to the distribution group. For a large distribution group, the resulting To field may exceed the number of allowed recipients. This triggers the NDR.
Use one of the following methods to prevent meeting updates from being sent with additional recipients on the To line.
Resolution 1: Disable responses for the meeting request
When you send a meeting request to a large distribution group whose membership exceeds the established recipient limit for your organization, turn off responses from recipients:
- In the new meeting window, select Response Options on the ribbon.
- Clear the Request Responses check box.
This prevents responding recipients from being added to the To field on subsequent updates to the meeting. This also prevents the tracking of meeting responses on the meeting's Tracking page.
Note
In some cases, clearing the Request Responses check box may not completely eliminate recipient responses from being returned to the organizer. This behavior may still occur if the recipient accepts the meeting invitation from a device or a non-Outlook client that does not honor the disabled Request Responses setting. This behavior is known to occur with certain third-party implementations of Exchange ActiveSync (EAS).
Resolution 2: Manually delete additional responses before you send the update
When you reopen the meeting to send an update, delete the additional recipient entries from the To field before you click Send Update.
Note
When you use this method, removed recipients may receive a cancellation notice for the original meeting, and the update may be applied as a new meeting on the recipient's calendar.
More information
The
MaxRecipientEnvelopeLimit value in Exchange is where the organization limit for recipients is stored. The default setting in Exchange 2010 and Exchange 2013 is 5000 recipients per message. However, this setting can be configured by your Exchange administrator.
Exchange Online and Office 365 have an organization limit of 500 recipients per message, and this setting can't be modified. Where applicable, this value is configured through the following PowerShell command:
Set-TransportConfig -maxrecipientenvelopelimit: value
The
MaxRecipientEnvelopeLimit parameter specifies the maximum number of recipients for a message. The default value is 5000. The valid input range for this parameter runs from 0 through 2147483647. If you enter a value of Unlimited, no limit is imposed on the number of recipients for a message. Exchange treats an unexpanded distribution group as one recipient. This parameter is available only in on-premises installations of Exchange 2013.
For more information, see Set-TransportConfig. | https://docs.microsoft.com/en-US/exchange/troubleshoot/administration/too-many-recipients-update-meeting-distribution-group | 2021-06-12T18:10:13 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.microsoft.com |
ODF plugfest and OOoCon, Orvieto
I’ve spent the last week in the city of Orvieto, perched atop a hill in Umbria, Italy. Monday and Tuesday I particpated in the second ODF Plugfest, and then Wednesday through Friday I attended OOoCon, the annual OpenOffice.org conference. I gave a presentation on Wednesday about Office’s approach to interoperability with OpenOffice.org, which you can find on the OOoCon presentation page, and you can find the presentations from the plugfest, as well as the test scenarios we went through, on the plugfest web site.
It was great to see everyone I had met at the last plugfest, and I also had the opportunity to finally meet in person many people I’ve only known via email and the ODF TC calls, including Svante Schubert, Charles Schulz, Louis Suarez-Potts, Eike Rathke and others. Everyone was great, and made me feel very welcome.
I was planning to do some sightseeing in Rome this weekend, but there is a train strike that begins at 21:00 today (Saturday), so I’m going to stay right here in Orvieto until Monday, when I’ll fly to Brussels for meetings and preparations for the upcoming DII workshop on Thursday, November 12. If you’d like to see the photos I’ve taken in Orvieto this week, you can find them on Flickr, and I’ve also included thumbnails of a few favorites below.
And now, after a long day of photographing the sights of Orvieto, it’s time to get out and enjoy some local cuisine. Buon appetito! | https://docs.microsoft.com/en-us/archive/blogs/dmahugh/odf-plugfest-and-ooocon-orvieto | 2021-06-12T19:03:23 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.microsoft.com |
Upgrade
Please follow Binance Chain Telegram Announcement Channel or forum to get the latest news about upcoming upgrades.
Upgrading Full Node
Many of Binance Chain upgrades are hardfork ones. If so, you have to finish the upgrade steps before the hardfork block height.
- If your node is already synced with the network, please download the new binary and replace the previous version
- Replace the config.toml and app.toml under home folder with the latest versions. You can customize those parameters.
- Stop the bnbchaind process and restart it with the new one.
bnbchaind start --home <home-path>
Forget to Upgrade
The Binance Chain has a hardfork upgrade and if you failed to upgrade your fullnode to the latest version,
bnbchaind process will stop and even if you restart with the latest version, the following error will appear:
panic: Tendermint state.AppHash does not match AppHash after replay. Got , expected 393887B67F69B19CAB5C48FB87B4966018ABA893FB3FFD241C0A94D2C8668DD2 goroutine 1 [running]: github.com/binance-chain/node/vendor/github.com/tendermint/tendermint/consensus.checkAppHash(0xa, 0x0, 0xc000bd8c56, 0x6, 0xc000b247c0, 0x12, 0x14e7bf9, 0x8592eb, 0xc000b247e0, 0x20, ...) /Users/huangsuyu/go/src/github.com/binance-chain/node/vendor/github.com/tendermint/tendermint/consensus/replay.go:464 +0x213 github.com/binance-chain/node/vendor/github.com/tendermint/tendermint/consensus.(*Handshaker).ReplayBlocks(0xc000b37980, 0xa, 0x0, 0xc000bd8c56, 0x6, 0xc000b247c0, 0x12, 0x14e7bf9, 0x8592eb, 0xc000b247e0, ...)
To recover from the
state conflict error, you need to:
Backup your home directory, (default is ~/.bnbchaind)
Download the tool: state-recover
Get the height of upgrade, this height will be announced in the upgrade announcement on the forum. For example, if it's announced as 5000 in the forum and run the following command will make your full node recover to the last block before the upgrade, and that is 4999 :
./state_recover 4999 <your_home_path>
Restart with the latest version of
bnbchaind
bnbchaind start & | https://docs.binance.org/guides/node/upgrade.html | 2021-06-12T18:05:37 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.binance.org |
Control.
On Manipulation Starting(ManipulationStartingRoutedEventArgs) Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Called before the ManipulationStarting event occurs.
Equivalent WinUI method: Microsoft.UI.Xaml.Controls.Control.OnManipulationStarting.
protected: virtual void OnManipulationStarting(ManipulationStartingRoutedEventArgs ^ e) = OnManipulationStarting;
void OnManipulationStarting(ManipulationStartingRoutedEventArgs const& e);
protected virtual void OnManipulationStarting(ManipulationStartingRoutedEventArgs e);
function onManipulationStarting(e)
Protected Overridable Sub OnManipulationStarting (e As ManipulationStartingRoutedEventArgs)
Parameters
Event data for the event.
Remarks. | https://docs.microsoft.com/it-it/uwp/api/windows.ui.xaml.controls.control.onmanipulationstarting?view=winrt-19041 | 2021-06-12T19:00:15 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.microsoft.com |
New in version 2018.3.0.
Manage S3 resources. Be aware that this interacts with Amazon's services, and so may incur charges.
This module uses
boto3, which can be installed via package, or pip.:
s3.keyid: GKTADJGHEIQSXMKKRBJ08H s s3 object exists: boto_s3.object_present: - name: s3-bucket/s3-key - source: /path/to/local/file - region: us-east-1 - keyid: GKTADJGHEIQSXMKKRBJ08H - key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs - profile: my-profile
boto3
salt.states.boto_s3.
object_present(name, source=None, hash_type=None, extra_args=None, extra_args_from_pillar='boto_s3_object_extra_args', region=None, key=None, keyid=None, profile=None)¶
Ensure object exists in S3.
The name of the state definition. This will be used to determine the location of the object in S3, by splitting on the first slash and using the first part as the bucket name and the remainder as the S3 key.
The source file to upload to S3, currently this only supports files hosted on the minion's local file system (starting with /).
Hash algorithm to use to check that the object contents are correct. Defaults to the value of the hash_type config option.
A dictionary of extra arguments to use when uploading the file. Note that these are only enforced if new objects are uploaded, and not modified on existing objects. The supported args are those in the ALLOWED_UPLOAD_ARGS list at. However, Note that the 'ACL', 'GrantFullControl', 'GrantRead', 'GrantReadACP', and 'GrantWriteACL' keys are currently not supported.
Name of pillar dict that contains extra arguments. Extra arguments defined for this specific state will be merged over those from the pillar.
Region to connect to.
Secret key to be used.
Access key to be used.
A dict with region, key and keyid, or a pillar key (string) that contains a dict with region, key and keyid. | https://docs.saltproject.io/en/latest/ref/states/all/salt.states.boto_s3.html | 2021-06-12T17:25:57 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.saltproject.io |
This topic identifies example network configurations and then describes two sample IP configuration exercises. The first example illustrates a typical case of a database application dependent upon a single IP resource and configured on a pre-existing subnet. The second example illustrates an active/active scenario where multiple IP resources are configured.
Network Configuration
The first two configuration examples assume the network configuration diagrammed in the following figure.
The network configuration has these components:
- Servers. The configuration has two servers, Server 1 and Server 2, each with the appropriate LifeKeeper and application software installed.
- Interfaces. Each server has two Ethernet interfaces, eth0 and eth1, configured as follows:
- Network. The network consists of three subnetworks:
º Low traffic backbone (25.0.3) primarily for servers
º High traffic backbone (25.0.1) with both servers and clients
º High traffic client network (25.0.2.)
A gateway provides interconnection routing between all LANs. A Domain Name Server (not shown) is used for address resolution.
- Heartbeat. TCP heartbeat communication paths would be configured using either or both of the server subnetworks.
Typical Configuration Example
Server 1 and Server 2 have access to an application called mydatabase that resides on a shared disk. To ensure that the application mydatabase and the IP resources used to access it are switched together, the system administrator creates a mydatabase application resource and adds the IP resource to the application hierarchy as a dependency.
These are the configuration issues:
- Application hierarchy. The application hierarchy must exist before the administrator names it as a parent of the IP resource. For the purposes of this example, Server 1 is the primary server. The application resource tags are mydatabase-on-server1 and mydatabase-on-server2.
- IP resource name. The administrator adds the name and address of the IP resource to the /etc/hosts file on both Server 1 and Server 2 and to the DNS database. In this example, the IP resource name is databaseip and its network address is 25.0.1.2. If no name-to-IP address association is necessary, then this is not required.
- Routers, gateways, and users. Because databaseip is an address on an existing subnet, no additional configuration is necessary. The IP resource is on the 25.0.1 subnet. All users connect to databaseip via the route they currently use to get to the 25.0.1 subnet. For example, users on 25.0.2 go through the gateway and users on 25.0.1 connect directly.
- IP instance definition. When the administrator enters databaseip as the IP resource on the Resource Hierarchy Create screen, the software performs several tests. It verifies that Server 1 can determine the address that goes with databaseip (it is in the hosts file and/or can be retrieved from the DNS). It also verifies that the address retrieved, address 25.0.1.2, is not already in use. Since the IP resource is on the 25.0.1 subnet, the IP Recovery software will ensure that it is configured on the eth1 interface. If the IP resource is acceptable, the software fills in the remainder of the wizard dialog boxes with default values, as shown in the table below Figure 3. If you selected all the default values, an independent IP resource hierarchy called ip-databaseip would be created.
Note: The tables associated with each configuration illustration provide examples of the appropriate information that would be entered in the Create Resource Hierarchy wizard for the primary server (Server 1) and Extend Resource Hierarchy wizard for the backup server (Server 2). For additional details on what information should be entered into the wizards, refer to the LifeKeeper Configuration Tasks section later in this section. These tables can be a helpful reference when configuring your recovery kit.
Figure 3. Typical Configuration Example of IP Resource Creation
Configuration Notes:
- The application resource is mydatabase-on-server1.
- The IP resource is databaseip with a tag name of ip-databaseip.
- If mydatabase-on-server1 fails, LifeKeeper switches it to Server 2; (ip-databaseip is only switched if a dependency exists).
- If Server 1 fails, both resources are brought in-service on Server 2.
- During a switchover, databaseip users would be disconnected. When they log back in, they can access any applications on Server 2.
- During a manual switchover, users connected to Server 1 via connections other than databaseip remain connected to Server 1..
Test Your IP Resource
To verify the successful creation of the IP resource, the administrator should perform the following tasks:
- From the LifeKeeper GUI, observe whether ip-databaseip is in-service (ISP) on Server 1.
- From a remote server, connect to address databaseip using ping or telnet.
- Test manual switchover by selecting the in_service option on Server 2 and selecting ip-databaseip. Verify that the IP address migrates to Server 2.
Active/Active Configuration Example
The second example, using the same network configuration, describes two IP resources, one active on each server.
Resource Addresses
For this example, the IP resources are server1ip (address 25.0.6.20) and server2ip (address 25.0.6.21). Entries for these resources must be in the /etc/hosts files on each server and in the DNS database.
Router Configuration
Because the selected addresses are on a new (logical) subnet, they can be configured for either eth0 or eth1. However, both must go on the same interface.
For this example, choosing eth0 means that all users would have to go through the gateway. Choosing eth1 would allow the users on the 25.0.1 subnet to access the resources directly (assuming that the new subnet had been added to their internal routing tables). Users on subnet 25.0.2 would still require the gateway. For the purposes of this example, the selected interface is eth1.
Regardless of which physical network is chosen to support the new subnet, the network administrator would have to add routing information to the gateway system before creating the IP resources.
First IP Resource Definition
The administrator creates the first IP resource on Server 1. eth0 is the first available interface on each server and would appear as the default. To define eth1 as the interface, the administrator selects it from the list of available interfaces..
Second IP resource definition
The administrator creates the second IP resource on Server 2. eth0 is the first available interface on each server and would appear as the default. To define eth1 as the interface, the administrator selects it from the list of available interfaces.
Creating an IP resource hierarchy on Server 2:
Note: See the topic Guidelines for Creating an IP Dependency before extending an IP resource to a backup server.
Extending an IP resource hierarchy to Server 1:
Note: The actual IP address associated with the DNS name is displayed in the Extend Wizard as the IP resource.
Testing IP Resources
The administrator should verify that the new resources are functioning on both servers by performing the following tests:
- With each resource on its primary server, verify that each is accessible by using either ping or telnet. The administrator may also want to test connectivity from all user sites.
- Test switchover by manually bringing ip-server1ip into service on Server 2. Verify both resources are functional on Server 2.
- Bring both resources into service on Server 1. Verify both resources are functional on Server 1.
- Bring ip-server2ip back into service on its primary server, Server 2.
このトピックへフィードバック | https://docs.us.sios.com/spslinux/9.3.2/ja/topic/ip-recovery-kit-configuration-examples | 2021-06-12T18:01:39 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.us.sios.com |
GetDescriptor method of the CIM_USBDevice class
The GetDescriptor method returns the USB device descriptor as specified by the input parameters. GetDescriptor( [in] uint8 RequestType, [in] uint16 RequestValue, [in] uint16 RequestIndex, [in, out] uint16 RequestLength, [out] uint8 Buffer[] );
Parameters
RequestType [in]
Bit-mapped identifier for the type of descriptor request and the recipient. Refer to the USB specification for the appropriate values for each bit.
RequestValue [in]
Contains the descriptor type in the high byte and the descriptor index (for example, index or offset into the descriptor array) in the low byte. For more information, see the USB specification.
RequestIndex [in]
Specifies the 2-byte language identifier code used by the USB device when returning string descriptor data. The parameter is typically 0 (zero) for nonstring descriptors. For more information, see the USB specification.
RequestLength [in, out]
On input, length (in octets) of the descriptor that should be returned. If this value is less than the actual length of the descriptor, only the requested length is returned. If it is more than the actual length, the actual length is returned.
On output, the length (in octets) of the buffer being returned. If the requested descriptor does not exist, the contents of this parameter are undefined.
Buffer [out]
Returns the requested descriptor information. If the descriptor does not exist, the contents of this parameter are undefined.
Return value
Returns a value of 0 (zero) if the USB descriptor is successfully returned, 1 (one) if the request is not supported, and any other number to indicate an error. In a subclass, the set of possible return codes could be specified by using a ValueMap qualifier on the method. The strings to which the mofqualifier contents are translated can also be specified in the subclass as a Values array qualifier.. | https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/getdescriptor-method-in-class-cim-usbdevice?redirectedfrom=MSDN | 2019-11-12T04:17:42 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.microsoft.com |
Are you ready for energy savings? Enable our cloud-based energy analytics and dashboard for your Iskraemeco Mx382 GPRS meters, to discover, measure and verify savings in the premises you manage.
Iskraemeco are a world leading electric metering manufacturer providing smart metering technologies to utility companies and governments worldwide. Data integration happens via a centralised head-end system (HES) provided by Wattics partner Meterix, which are officially licenced and supported by Iskraemeco.
NOTE: It is assumed that your Iskraemeco Mx382 GPRS meters are already wired and collect energy use data. If not, read and understand Iskraemeco manuals for installing, operating, or maintaining your meters. Installation and program procedures must be carried out and inspected by qualified personnel. Qualified personnel are those who, based on their training and experience, are capable of identifying risks and avoiding potential hazards when working with this product.
Step 1: Register your meter and data point with Wattics
You will need to provide us with:
- Serial number of your GPRS meter, which you can find at the front of the meter
- Data service you want:
– Sampling frequency: 5mn, 15mn, hourly readings
– Time of delivery: as soon as sampled, hourly, daily
- Country where your GPRS meter is located, so we can assess the GSM data service pricing.
You will need a new (free) SIM card for your meter to enable data connectivity to Wattics. A remote programming device will also be sent/loaned to complete the meter configuration for the new SIM – including a €70 fee and refundable deposit. A member of our team will get back to you regarding total pricing (including GSM data service and monthly software subscription).
Step 2: Configure your meter with the new SIM card
The meter needs programming with the correct APN settings for the new SIM to work. The remote programming device will be used by our team for that purpose, and immediate checks will be done to confirm data upload to Wattics is setup correctly.
Once GPRS data collection is enabled, we will enable your dashboard and will give you access to get started!
You don’t have yet an Iskraemeco Mx382 GPRS meter?
Contact us if you would like to receive technical and pricing information. The 3-phase meter measures and pushes electrical readings to Wattics directly over GPRS, making it a simple and cost-effective solution for your energy efficiency projects. | https://docs.wattics.com/2017/03/15/connect-your-iskraemeco-mx382-gprs-meters-to-wattics/ | 2019-11-12T03:56:28 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['/wp-content/uploads/2017/03/Mx382ToWattics.jpg',
'Rainforest Eagle Energy Gateway'], dtype=object)
array(['/wp-content/uploads/2017/03/Mx382-SN.png',
'Iskraemeco Mx382 Serial Number'], dtype=object)
array(['/wp-content/uploads/2016/02/dash-screen.jpg', 'Mx382 Data Push'],
dtype=object) ] | docs.wattics.com |
Prev
Part IV. Configuring Evergreen for your workstation
Chapter 9. Setting search defaults
Go to Administration → Workstation.
Use the dropdown menu to select an appropriate
Default Search Library
. The default search library setting determines what library is searched from the advanced search screen and portal page by default. You can override this setting when you are actually searching by selecting a different library. One recommendation is to set the search library to the highest point you would normally want to search.
Use the dropdown menu to select an appropriate
Preferred Library
. The preferred library is used to show copies and electronic resource URIs regardless of the library searched. One recommendation is to set this to your home library so that local copies show up first in search results.
Use the dropdown menu to select an appropriate
Advanced Search Default Pane
. Advanced search has secondary panes for Numeric and MARC Expert searching. You can change which one is loaded by default when opening a new catalog window here.
Prev
Up
Part IV. Configuring Evergreen for your workstation
Chapter 10. Turning off sounds
Give Feedback
about this page. You can also join the
Documentation Interest Group
.
© 2008-2017
GPLS
and others. The
Evergreen Project is a member
of the
Software Freedom Conservancy
. | http://docs.evergreen-ils.org/reorg/3.1/cataloging/_setting_search_defaults.html | 2019-02-15T22:24:03 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.evergreen-ils.org |
Expression prefixes for numeric data other than date and time
In addition to configuring expressions that operate on time, you can configure expressions for the following types of numeric data:
The length of HTTP requests, the number of HTTP headers in a request, and so on.
For more information, see Expressions for numeric HTTP payload data other than dates.
IP and MAC addresses.
For more information, see Expressions for IP addresses and IP subnets.
Client and server data in regard to interface IDs and transaction throughput rate.
For more information, see Expressions for numeric client and server data.
Numeric data in client certificates other than dates.
For information on these prefixes, including the number of days until certificate expiration and the encryption key size, see Prefixes for numeric data in SSL certificates. | https://docs.citrix.com/en-us/citrix-adc/12-1/appexpert/policies-and-expressions/ns-pi-adv-exp-work-date-time-num-wrapper-con/ns-pi-exp-prefix-numeric-data-date-time-con.html | 2019-02-15T22:31:49 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.citrix.com |
For cPanel & WHM 11.46
Overview
The Branding feature allows you to change the look of your users’ cPanel interfaces by replacing the default images with your own. You may do this using the WHM interface, or by manually placing the images in the correct directories. If you choose to manually upload the images, read the instructions on the cPanel Branding page first.
Branding options
The interface you will use during the branding process will depend on which link you click.
Note the following:
- Older themes such as x and x2 offer you two options:
- You may click Live Editor to access a branding tool based on the theme’s interface.
You may click Legacy Editor to access the WHM Branding page.
Warning
The x and x2 themes are deprecated and should not be used unless absolutely necessary.
- Newer themes such as x3 and x3mail allow you to access the newer cPanel Branding Editor.
Notes:
If you have disallowed root or reseller logins to cPanel user accounts, the Live Editor link will not work. To re-enable the Live Editor, select one of the following options at Tweak Settings > System > Accounts that can access a cPanel user account:
- Root, Account-Owner, and cPanel User — To allow root and the reseller to access the cPanel account
- Account-Owner and cPanel User Only — To allow the reseller to access the cPanel account | https://docs.cpanel.net/display/1146Docs/About+Branding | 2019-02-15T21:19:25 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.cpanel.net |
Diagrams can be exported to various image formats. dbForge Fusion for SQL Server supports the following formats for diagram export: bmp, jpg, gif, png, tif, and emf.
To export a diagram to an image, perform the following steps:
Bitmap generating engine needs contiguous memory area for the bitmap. If the diagram is large, it is not always possible even if you have enough memory because of memory fragmentation. When such a problem occurs, you will get the following error: The error occurred while diagram exporting. Probably the image is oversized. Try export to another format. This error occurs when exporting diagrams of about 10 000 x 10 000 pixels size and sometimes even with a smaller diagram.
What can be done: | https://docs.devart.com/fusion-for-sql-server/database-designer/exporting-diagram.html | 2019-02-15T21:20:28 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.devart.com |
.symfix (Set Symbol Store Path)
The .symfix command automatically sets the symbol path to point to the Microsoft symbol store.
.symfix[+] [LocalSymbolCache]
Parameters
+
Causes the Microsoft symbol store path to be appended to the existing symbol path. If this is not included, the existing symbol path is replaced.
LocalSymbolCache
Specifies the directory to be used as a local symbol cache. If this directory does not exist, it will be created when the symbol server begins copying files. If LocalSymbolCache is omitted, the sym subdirectory of the debugger installation directory will be used.
Environment
Additional Information
For details, see Using Symbol Servers and Symbol Stores.
Remarks
The following example shows how to use .symfix to set a new symbol path that points to the Microsoft symbol store.
3: kd> .symfix c:\myCache 3: kd> .sympath Symbol search path is: srv* Expanded Symbol search path is: cache*c:\myCache;SRV*
The following example shows how to use .symfix+ to append the existing symbol path with a path that points to the Microsoft symbol store.
3: kd> .sympath Symbol search path is: c:\someSymbols Expanded Symbol search path is: c:\somesymbols 3: kd> .symfix+ c:\myCache 3: kd> .sympath Symbol search path is: c:\someSymbols;srv* Expanded Symbol search path is: c:\somesymbols;cache*c:\myCache;SRV*
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-symfix--set-symbol-store-path- | 2019-02-15T21:33:01 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.microsoft.com |
OpenSSL
The openssl tool can generate RSA 512 BIT key DH 512 bits keys, benchmark, cipherlist and engine. But like the preference panel, please remember that it s still in heavy beta test and in the future will be transformed in a full featured gui for your open ssl maintenance & operation. Actually you can save a ssl session or stop all if you think it took too much time for the speed test. For best results the benchmark must stay in focus (frontmost) to prevent it from going idle. There is also a manual of the last openssl. You can also send a mail of the session.
Save : Save the actual output
Print : Print the actual output
Mail : Mail the actual output
Manual : Launch the OpenSSL manual
Stop : Stop all activities of the OpenSSL Window | https://docs.rbcafe.com/cryptix/files/002-003-openssl.html | 2019-02-15T21:12:26 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['../gfx/cryptix-ic-small.png', 'Icon'], dtype=object)] | docs.rbcafe.com |
How have a separate license. Ingested metrics data draws from the same license quota as event data.
The Splunk Enterprise trial license
When you first install a downloaded copy of Splunk Enterprise, the installed instance uses a 60 day trial license. This license allows you to try out all of the features in Splunk Enterprise for 60 days, and to index up to 500 MB of data per day.
If you want to continue using Splunk Enterprise features after the 60 day trial expires, you must purchase an Enterprise license. Contact a Splunk sales rep to learn more. See Types of Splunk licenses for information on Enterprise licenses.
If you do not install an Enterprise license after the 60 day trial expires, you can switch to Splunk Free. Splunk Free includes a reduced feature set, and any alerts you have previously configured will no longer run once you switch to Splunk Free.
WP Ultimo handles domain mapping natively, without the need for additional plugins.
What’s domain mapping?
As the name suggests, domain mapping is the ability offered by WP Ultimo to take in a request for a custom domain and map that request to the correspondent site in the network with that particular domain attached.
How to setup domain mapping on your WP Ultimo Network
Domain mapping requires some setting up on your part to work. Thankfully, WP Ultimo automates the hard work for you so you can easily meet the requirements.
Testing the setup Using the WP Ultimo Wizard
One of the simplest ways to get up and running is to go to any WP Ultimo page inside the dashboard and search for the Help tab at the top.
Help tab on a WP Ultimo admin page
Click on the Help tab and after it opens up, search for the Setup Wizard button.
Clicking that button should take you right to the Setup Wizard, the same screen you most likely visited right after you first installed and activated WP Ultimo. On the Wizard, click Skip Step until you reach the System
WP Ultimo Wizard automates part of the setup
Follow the instructions on that screen exactly as described.
Important: WP Ultimo might have problems copying the sunrise.php file into your wp-content folder automatically. If that happens to you, you might need to manually copy the sunrise.php file from inside the wp-ultimo folder to your wp-content folder via FTP.
Important 2: Note that you need to place the define('SUNRISE', true); line ABOVE the /* That's all, stop editing! Happy blogging. */ line in your wp-config.php. If for whatever reason that line does not exist in your wp-config.php file, add the define('SUNRISE', true); line after the first line of your wp-config.php file.
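For reference, the relevant part of wp-config.php typically ends up looking like this (surrounding settings omitted; only the placement of the constant matters):

define( 'SUNRISE', true );

/* That's all, stop editing! Happy blogging. */
require_once( ABSPATH . 'wp-settings.php' );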
After following all the steps, click the Check Configuration button, and WP Ultimo will run a system diagnosis to see if everything is set on your network.
You need to get green OKs on the Sunrise.php file on the wp-content directory and Sunrise constant set to true items.
Mapping Domains
Now that your network is ready to handle domain mapping we are ready to start mapping some domains! And there are two main ways of doing that: Adding Domain Mappings yourself (as a super admin) – this is useful if you want to add mappings to non-client sites inside your network; or letting your clients map their own domains to their sites.
In both cases, you’ll need to turn on the mapping domain functionality inside WP Ultimo. You can do that by going to WP Ultimo Settings > Domain Mapping and SSL > Enable Domain Mapping and saving.
In both cases you’ll also need to make sure the domain your are planning to map is correctly configured, which is covered right below.
Making sure the domain DNS settings are properly configured
For a mapping to work, you need to make sure the domain you are planning to map is pointing to your Network’s IP address.
To do that, you need to add an A record to your DNS configuration pointing to that IP address. DNS management varies greatly between different domain registrars, but there are plenty of tutorials online covering that if you search for "Creating an A record on XXXX", where XXXX is your domain registrar (e.g. "Creating an A record on GoDaddy").
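In zone-file terms, the record to create looks something like the line below, where the domain name and IP address are placeholders you must replace with the client's domain and your network's real IP:

clientdomain.com.    3600    IN    A    203.0.113.10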
If you are not sure what’s the IP address of your network, you can use services like Site24x7. Just enter your network’s main domain address on that and it will spit out the IP address.
If you find yourself having trouble getting this to work, contact your domain registrar support and they will be able to help you with this part.
If you plan to allow your clients to map their own domains, they will have to do the work on this part themselves. Point them towards their registrar support system if they find themselves unable to create the A record.
Adding new mappings as a Super Admin
Adding a new mapping to one of your network's sites is pretty simple. First, you need to go to Network Admin > All Sites.
Then, when the Sites list appears, search for the site you want to add a mapping to. Hovering over its table row will make the Edit link visible:
Once in the Edit Site screen, search for the Aliases tab.
That’s it. Once in the Aliases management screen you’ll be able to add, remove, activate and deactivate site mappings!
Letting your users map their own domains
Maybe your clients already have their own business domain and want to attach that to the site they have created in your network. In order for them to be able to do that, you’ll need to allow custom domains on WP Ultimo Settings > Domain Mapping and SSL > Enable Custom Domains.
Enable custom domains
Allowing certain plans to map domains
In WP Ultimo, almost everything is controlled on a plan per plan basis. That means that you can offer the ability of mapping custom domains as a feature only to certain tiers of your service!
To activate support for domain mapping for a certain plan, go to Plans, select the plan in question and tick the Enable custom domain option:
That’s all you need! After that, any client of that particular plan will see an extra meta-box on their Account page:
Your clients will now be able to use that meta-box to map their own domains! Pretty neat, uhh!
Important: As you can see in the screenshot above, WP Ultimo tries to guess your Network IP to display to the user. This is not always accurate, though. You can customize the Network IP address showed there on the Domain Mapping and SSL settings page of WP Ultimo, which will cover along other options in the next topic.
Extra Settings (Advanced)
WP Ultimo offers a number of other different controls you can use to customize the behavior of the domain mapping functionality. Most of those controls are located on the WP Ultimo Settings > Domain Mapping and SSL tab. Let’s see what some of them do.
Domain Mapping Alert Message
This option allows you to customize the alert message your customers will see when they map a new domain using the Custom Domain meta-box:
Change the alert message
This is what your customers will see once they click the Set Custom Domain button.
Force Admin Redirect
This option lets you choose the default behavior of a site with a mapped domain attached. Be careful, as changes here can make a client’s site unaccessible.
You can allow your users to access the admin via both the mapped domain and your network domain (which is a safe option, since even if the mapped domain is not correctly configured and the site is not accessible via it, the client will be able to access the admin panel via your network domain).
You can also force access to the admin panel to use your network domain. This is useful if you want to make sure the admin panel is ALWAYS accessible.
Lastly, you can force admin access to take place only over the mapped domain. This is dangerous and is only recommended if you are the one setting up the mappings. If this option is used and the user maps a misconfigured domain, their admin panel will become inaccessible until you, as the super admin, remove the mapping from the network admin panel.
Network IP
Use this option if the IP WP Ultimo guessed for your network does not correspond to the real IP of your network. You can check the real IP of your network by using services like Site24x7. Just enter your network’s main domain address on that and it will spit out the IP address.
WP Ultimo Hosting Support
Our domain mapping will work out-of-the-box with most hosting environments, but some managed hosting platforms like WPEngine, Kinsta, and Cloudways may require the network admin to manually add the mapped domains as additional domains on their platform as well.
We are working closely with the hosting platforms to automate this process so no manual action is required from the admin after a client maps a new domain.
So far, WP Ultimo integrates with:
WP Engine
Works automatically – no additional set-up required.
Closte.com
Works automatically (including auto-SSL) – no additional set-up required.
Cloudways
Works automatically, but requires additional set-up. Read the Tutorial →
CPanel
Works automatically, but requires additional set-up. Read the Tutorial →
RunCloud.io
Works automatically, but requires additional set-up. Read the Tutorial →
ServerPilot.io
Works automatically, but requires additional set-up. Read the Tutorial →
We contacted Kinsta and they don’t have the necessary APIs in place yet, but assured us it is on their road-map. As soon as they implement the necessary tools, WP Ultimo will support them as well.
Troubleshooting
I’m getting a redirect loop when trying to access a site with a mapped domain.
If you’re getting a redirect loop error when trying to access a site with a mapped domain, go to WP Ultimo Settings > Domain Mapping and SSL and disable the Enable Single Sign-on option. | https://docs.wpultimo.com/knowledge-base/getting-started-with-domain-mapping-in-wp-ultimo/ | 2019-02-15T20:46:47 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-23-13_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-24-39_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Annotation.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/E86ADCBE-EE70-4183-9E0C-CFF559C96C1F.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-38-35_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-40-17_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/E0FD2547-9CC3-48AF-940F-53D892D448B9.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-42-15_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_09-57-02_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_10-11-30_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_10-13-20_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Capto_Capture-2018-06-20_10-19-16_AM.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/3E57E1F3-8C62-45D7-9C0C-54FFE0C90687.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Screen-Shot-2018-06-20-at-10.41.10.png',
None], dtype=object)
array(['https://docs.wpultimo.com/wp-content/uploads/sites/47239/2018/06/Screen-Shot-2018-06-20-at-10.41.00.png',
None], dtype=object) ] | docs.wpultimo.com |
Parking Spot
Description
A parking spot is an area well delimited where one vehicle can be parked. The aim of this entity type is to monitor the status of parking spots individually. Thus, an entity of type ParkingSpot cannot exist without a containing entity of type OnStreetParking or OffStreetParking. A parking spot might belong to one group.
Data Model
The data model is defined as shown below:
id: Entity's unique identifier.
type: Entity type. It must be equal to ParkingSpot.
source: A sequence of characters giving the source of the entity data.
- Attribute type: Text or URL
- Optional
dataProvider: Specifies the URL to information about the provider of this information
- Attribute type: URL
- Optional
dateCreated: Entity creation date.
dateModified: Last update timestamp of this entity.
name: Name of this parking spot. It can denote the number or label used to identify it within a parking site.
- Normative References:
- Optional
description: Description about the parking spot.
- Normative References:
- Optional
location: Geolocation of the parking spot, represented by a GeoJSON Point.
- Attribute type: geo:json.
- Normative References:
- Mandatory. Not nullable (if address is not defined).
address: Registered parking spot civic address.
- Normative References:
- Mandatory. Not nullable (if location is not defined).
status: Status of the parking spot from the point of view of occupancy.
- Attribute type: Text
- Allowed Values: one of (occupied, free, closed, unknown)
- Metadata:
TimeInstant: Timestamp saved by FIWARE's IoT Agents. Note: This attribute has not been harmonized to keep backwards compatibility with current FIWARE reference implementations.
- Mandatory
width: Width of the parking spot.
length: Length of the parking spot.
refParkingGroup: Group to which the parking spot belongs to. For model simplification purposes only one group is allowed per parking spot.
- Attribute type: Reference to an entity of type ParkingGroup.
- Optional
refParkingSite: Parking site to which the parking spot belongs to.
- Attribute type: Reference to an entity of type OnStreetParking or type OffStreetParking, depending on the value of the category attribute.
- Mandatory
category: Category(ies) of the parking spot.
TimeInstant: Timestamp saved by FIWARE's IoT Agent. Note: This attribute has not been harmonized to keep backwards compatibility with current FIWARE reference implementations.
refDevice: The device representing the physical sensor used to monitor this parking spot.
Normalized Example
{
  "id": "santander:daoiz_velarde_1_5:3",
  "type": "ParkingSpot",
  "status": {
    "value": "free",
    "metadata": {
      "timestamp": {
        "type": "DateTime",
        "value": "2018-09-21T12:00:00"
      }
    }
  },
  "category": {
    "value": ["onstreet"]
  },
  "refParkingSite": {
    "type": "Relationship",
    "value": "santander:daoiz_velarde_1_5"
  },
  "name": {
    "value": "A-13"
  },
  "location": {
    "type": "geo:json",
    "value": {
      "type": "Point",
      "coordinates": [-3.80356167695194, 43.46296641666926]
    }
  }
}
key-value pairs Example
Sample uses simplified representation for data consumers
?options=keyValues
{ "id": "santander:daoiz_velarde_1_5:3", "type": "ParkingSpot", "name": "A-13", "location": { "type": "Point", "coordinates": [-3.80356167695194, 43.46296641666926] }, "status": "free", "category": ["onstreet"], "refParkingSite": "santander:daoiz_velarde_1_5" } | https://fiware-datamodels.readthedocs.io/en/latest/Parking/ParkingSpot/doc/spec/index.html | 2019-02-15T21:23:15 | CC-MAIN-2019-09 | 1550247479159.2 | [] | fiware-datamodels.readthedocs.io |
Supported Browsers for Lightning Experience
- Microsoft Edge
- Salesforce supports Microsoft Edge on Windows 10 for Lightning Experience. Note these restrictions.
- The HTML solution editor in Microsoft Edge isn’t supported in Salesforce Knowledge.
- Microsoft Edge isn’t supported for the Developer Console.
- Microsoft Internet Explorer version 11
- The full Salesforce site is supported in Internet Explorer 11 on Windows 8 and 8.1 for touch-enabled laptops with standard keyboard and mouse inputs only. There is no support for mobile devices or tablets where touch is the primary means of interaction. Use the Salesforce1 mobile browser app instead.
- The HTML solution editor in Internet Explorer 11 is not supported in Salesforce Knowledge.
- The Compatibility View feature in Internet Explorer isn’t supported.
- Changing the compatibility parsing mode of the browser, for example, by using the X-UA-Compatibility header, isn’t supported.
- Internet Explorer 11 isn’t supported for the Developer Console.
- Internet Explorer 11 isn’t supported for Lightning Console Apps.
- Drag and drop of files into feed comments isn’t supported in Internet Explorer.
- Mozilla® Firefox®, most recent stable version
- Salesforce makes every effort to test and support the most recent version of Firefox. For configuration recommendations, see Firefox.
- Google Chrome™, most recent stable version
- Chrome applies updates automatically. Salesforce makes every effort to test and support the most recent version. There are no configuration recommendations for Chrome.
Wave Analytics Supported Browsers
Browser support is available for Microsoft Edge, Microsoft Internet Explorer version 11, and the most recent stable versions of Mozilla Firefox and Google Chrome.
Recommendations and Requirements for All Browsers.
The minimum screen resolution required to support all Salesforce features is 1024 x 768. Lower screen resolutions don’t always properly display Salesforce features such as Report Builder and Page Layout Editor.
- For Mac OS users on Apple Safari or Google Chrome, make sure that the system setting Show scroll bars is set to Always.
Some third-party Web browser plug-ins and extensions can interfere with the functionality of Chatter. If you experience malfunctions or inconsistent behavior with Chatter, disable the Web browser's plug-ins and extensions and try again. | https://releasenotes.docs.salesforce.com/en-us/spring17/release-notes/getstart_browsers_sfx.htm | 2019-02-15T22:12:04 | CC-MAIN-2019-09 | 1550247479159.2 | [] | releasenotes.docs.salesforce.com |
How To
How to Create a Multi-Step Workflow
TabsPro Modules provides a facility for building and handling multi-step workflows in your application. Read More…
How to customize Bootstrap
Bootstrap can be customized via CSS. The strategy is to find or write the styles you need and put them in your portal .css file, or even in the default .css file, so that you do not change the Bootstrap styles directly. You don't want to mess with the library itself because you'll lose your changes on upgrades. Instead, you just have to override the styles in your own portal .css file, which is built into DNN and loaded automatically by DNN on the page.
How to add and style other Modules
We have a short video clip posted here which will guide you through the process of adding and styling other modules which will be fit for use in Tabs Pro.
How to add your own icon
In order to upload a .jpg or a .png file which contains an image you want to have as an option displayed on the list:
Access the DNN platform with an admin account, then go to the Admin menu option and click on File Management. When the page loads, select on the left sidebar the location where you want to upload the file (we usually recommend the Images folder under Root). Once you get there, click on the "Upload Files" button displayed in the right corner under the Search term box, and upload .jpg or .png files only.
Then you will find it on the tab you want to add the icon to by selecting Icon (from portal files) option, following the path where you uploaded the file.
How to update Tabs Pro to Font Awesome latest version
Tabs Pro has been updated and now supports the Font Awesome 4.6.3 library, in case you were struggling to update Tabs Pro to Font Awesome 4.6.3.
How to add the same module to multiple tabs
Tabs Pro works client side - meaning it adjusts the DOM so the modules fit into tabs according to settings. To be able to add the same module to multiple tabs we can’t really clone the module multiple times because that would probably break functionality since each module has its own IDs. So perhaps the solution is to integrate the module via iframe, so each instance is isolated, or do some kind of dynamic DOM manipulation where a module is moved around as the tab changes.
How does the export work
The Export does not export the other modules from the page. Only Tabs Pro settings are exported. Normally, people use the Export / Import to deploy changes from their UAT site to live. This is also used implicitly when a page is copied into DNN. This will work as long as there are modules on the page with the same name as on the original page.
How to use anchor tags inside tabs:
If you want to access a specific tab and scroll the page to a div inside that tab, you have to pass the div id in the 'goto' query string variable and add this script to the page header:
<script>
$(document).ready(function(){
    if (document.location.search) {
        var queries = {};
        $.each(document.location.search.substr(1).split('&'), function(c, q){
            var i = q.split('=');
            queries[i[0].toString()] = i[1].toString();
        });
        if (queries) {
            $(window).scrollTop($("#" + queries.goto).offset().top);
        }
    }
})
</script>
Example: // | https://docs.dnnsharp.com/tabs-pro/how-to.html | 2019-02-15T21:42:36 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.dnnsharp.com |
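For instance, assuming a page at http://yoursite.com/products (a hypothetical URL), linking to http://yoursite.com/products?goto=specs would open that page and scroll it to the element whose id is "specs" inside the corresponding tab.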
setMockMethodCallHandler method
Sets a mock callback for intercepting method invocations on this channel.
The given callback will replace the currently registered mock callback for this channel, if any. To remove the mock handler, pass null as the handler argument.
Later calls to invokeMethod will result in a successful result, a PlatformException or a MissingPluginException, determined by how the future returned by the mock callback completes. The codec of this channel is used to encode and decode values and errors.
This is intended for testing. Method calls intercepted in this manner are not sent to platform plugins.
The provided handler must return a Future that completes with the return value of the call. The value will be encoded using MethodCodec.encodeSuccessEnvelope, to act as if the platform plugin had returned that value.
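A minimal usage sketch in a test (the channel name and method name here are made up for the example):

import 'package:flutter/services.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  TestWidgetsFlutterBinding.ensureInitialized();

  const MethodChannel channel = MethodChannel('samples/battery');

  test('getBatteryLevel is mocked', () async {
    channel.setMockMethodCallHandler((MethodCall call) async {
      if (call.method == 'getBatteryLevel') {
        return 42; // completed value is encoded as a success envelope
      }
      return null;
    });

    expect(await channel.invokeMethod('getBatteryLevel'), 42);

    channel.setMockMethodCallHandler(null); // remove the mock handler
  });
}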
Implementation
void setMockMethodCallHandler(Future<dynamic> handler(MethodCall call)) {
  BinaryMessages.setMockMessageHandler(
    name,
    handler == null
        ? null
        : (ByteData message) => _handleAsMethodCall(message, handler),
  );
}
Mailsac Docs¶
Welcome to the official documentation for Mailsac, the email service built for developers. If you are new to this documentation we recommend taking a look at the introduction page to get an overview of what the documentation has to offer.
The table of contents in the sidebar will let you easily access the documentation for your topic of interest. You can also use the search function in the top left corner.
WebSocket/Webhook Examples¶ | https://docs.mailsac.com/en/latest/ | 2019-02-15T20:44:41 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.mailsac.com |
Category:Vip
From PeepSo Docs
Pages in category "Vip"
The following 3 pages are in this category, out of 3 total. | https://docs.peepso.com/wiki/Category:Vip | 2019-02-15T20:49:29 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.peepso.com |
Dask¶
Dask is a flexible library for parallel computing in Python.
Dask is composed of two parts:
- Dynamic task scheduling optimized for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads.
- “Big Data” collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of dynamic task schedulers.
See the dask.distributed documentation (separate website) for more technical information on Dask’s distributed scheduler.
Familiar user interface¶
Dask DataFrame mimics Pandas - documentation
import pandas as pd                     import dask.dataframe as dd
df = pd.read_csv('2015-01-01.csv')      df = dd.read_csv('2015-*-*.csv')
df.groupby(df.user_id).value.mean()     df.groupby(df.user_id).value.mean().compute()
Dask Array mimics NumPy - documentation
import numpy as np                      import dask.array as da
f = h5py.File('myfile.hdf5')            f = h5py.File('myfile.hdf5')
x = np.array(f['/small-data'])          x = da.from_array(f['/big-data'], chunks=(1000, 1000))
x - x.mean(axis=1)                      x - x.mean(axis=1).compute()
Dask Bag mimics iterators, Toolz, and PySpark - documentation
import dask.bag as db
b = db.read_text('2015-*-*.json.gz').map(json.loads)
b.pluck('name').frequencies().topk(10, lambda pair: pair[1]).compute()
Dask Delayed mimics for loops and wraps custom code - documentation
from dask import delayed

L = []
for fn in filenames:                  # Use for loops to build up computation
    data = delayed(load)(fn)          # Delay execution of function
    L.append(delayed(process)(data))  # Build connections between variables

result = delayed(summarize)(L)
result.compute()
The concurrent.futures interface provides general submission of custom tasks: - documentation
from dask.distributed import Client
client = Client('scheduler:port')

futures = []
for fn in filenames:
    future = client.submit(load, fn)
    futures.append(future)

summary = client.submit(summarize, futures)
summary.result()
Scales from laptops to clusters¶
Dask is convenient on a laptop. It installs trivially with conda or pip and extends the size of convenient datasets from "fits in memory" to "fits on disk".
Dask can scale to a cluster of 100s of machines. It is resilient, elastic, data local, and low latency. For more information, see the documentation about the distributed scheduler.
This ease of transition between single-machine to moderate cluster enables users to both start simple and grow when necessary.
Complex Algorithms¶
Dask represents parallel computations with task graphs. These directed acyclic graphs may have arbitrary structure, which enables both developers and users the freedom to build sophisticated algorithms and to handle messy situations not easily managed by the map/filter/groupby paradigm common in most data engineering frameworks.
We originally needed this complexity to build complex algorithms for n-dimensional arrays but have found it to be equally valuable when dealing with messy situations in everyday problems.
Index¶
Getting Started
Collections
Dask collections are the main interaction point for users. They look like NumPy and Pandas but generate dask graphs internally. If you are a dask user then you should start here.
Scheduling
Schedulers execute task graphs. Dask currently has two main schedulers: one for local processing using threads or processes; and one for distributed memory clusters.
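As a small illustration of switching between them (the keyword names follow recent Dask releases, so check the documentation for your version):

import dask.array as da

x = da.random.random((10000, 10000), chunks=(1000, 1000))

# Local schedulers: choose threads or processes per call
x.sum().compute(scheduler='threads')
x.sum().compute(scheduler='processes')

# Distributed scheduler: once a Client is created, compute() uses the cluster
from dask.distributed import Client
client = Client('scheduler-address:8786')   # the address is a placeholder
x.sum().compute()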
Diagnosing Performance
Parallel code can be tricky to debug and profile. Dask provides several tools to help make debugging and profiling graph execution easier.
- Understanding Performance
- Visualize task graphs
- Diagnostics (local)
- Diagnostics (distributed)
- Debugging
Graph Internals
Internally, Dask encodes algorithms in a simple format involving Python dicts, tuples, and functions. This graph format can be used in isolation from the dask collections. Working directly with dask graphs is rare, unless you intend to develop new modules with Dask. Even then, dask.delayed is often a better choice. If you are a core developer, then you should start here.
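A tiny hand-written graph gives the flavor of this format:

def inc(i):
    return i + 1

def add(a, b):
    return a + b

# Keys name tasks; values are either literal data or (function, *args) tuples
dsk = {'x': 1,
       'y': (inc, 'x'),
       'z': (add, 'y', 10)}

from dask.threaded import get
get(dsk, 'z')   # runs inc(1), then add(2, 10) -> 12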
Help & reference
- Development Guidelines
- Changelog
- Configuration
- Presentations On Dask
- Dask Cheat Sheet
- Comparison to Spark
- Opportunistic Caching
- Internal Data Ingestion
- Remote Data
- Citations
- Funding
- Images and Logos
Dask is supported by Anaconda Inc and develops under the BSD 3-clause license.
Changing Default Behavior of any API
Ucommerce ships out of the box with a lot of APIs whose default behavior works in most cases for most webshops. However, not all businesses are alike, and that is why Ucommerce enables you to override the default behavior of all available APIs and services.
Override the 80/20 APIs
The 80/20 APIs can be overridden in various ways:
- Libraries can be overridden by creating a new class that derives from the internal implementation, e.g. to override the CatalogLibrary API you'd derive from CatalogLibraryInternal and register it in the dependency injection container.
- Context classes can be overridden in the same way, or you can make a completely fresh implementation of the interfaces that drive the context classes.
Override TransactionLibrary.GetShippingMethods()
As an example to show you how to achieve this, we'll override GetShippingMethods() on the TransactionLibrary.
For the example we'll assume that there are two types of products: "shipped goods" and "donations". Those are two different product definitions in our store:
Now this obviously requires two different shipping methods, since you do not ship a donation. We could choose to restrict the shipping methods on various levels, but in this case product definition will do.
The first thing we'll need to do is create a class that derives from TransactionLibraryInternal. Since it doesn't contain any empty constructors, we have to inject the right dependencies and pass them on to the base class. Fortunately the dependency injection container will figure out which dependencies to pass to the constructor in the first place.
public class ExtendedTransactionLibrary : TransactionLibraryInternal
{
    public ExtendedTransactionLibrary(
        ILocalizationContext localizationContext,
        IClientContext clientContext,
        ICatalogContext catalogContext,
        IOrderContext orderContext,
        CheckoutService checkoutService,
        IOrderService orderService,
        IPaymentMethodService defaultPaymentMethodService,
        IEmailService emailService,
        CatalogLibraryInternal catalogLibraryInternal)
        : base(localizationContext, clientContext, catalogContext, orderContext,
               checkoutService, orderService, defaultPaymentMethodService,
               emailService, catalogLibraryInternal)
    {
    }
}
Now we can focus on the methods that we want to override. Let's assume that we cannot buy products and make donations on the same order. This will make the use case a little easier to deal with.
To make it even easier, let's further assume that the donation products have a SKU of "donation".
A third and last assumption is that we have a shipping method named "donation". We can then easily, without fetching too much data, look at the context and take the appropriate action:
public override ICollection<ShippingMethod> GetShippingMethods(Country country)
{
    const string donationIdentifier = "donation";

    // true creates the basket if it doesn't exist.
    var order = _orderContext.GetBasket(true).PurchaseOrder;

    if (order.OrderLines.Any(x => x.Sku == donationIdentifier))
    {
        return ShippingMethod.Find(x => x.Name == donationIdentifier).ToList();
    }

    // We did not find any products of type donation in the basket.
    // Let the base class handle the default behavior and return all except the donation shipping method.
    return base.GetShippingMethods(country).Where(x => x.Name != donationIdentifier).ToList();
}
Registering the Overridden Library
All there's left to do now is register the overridden service in the dependency injection container. Please refer to How to register a component in Ucommerce if you're interested in more details about components.
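As a rough sketch, a Castle Windsor style component registration for the class above could look like the following; the id, service, and assembly names here are assumptions, so match them to the existing registration shipped with your Ucommerce version:

<!-- in a components configuration file picked up by Ucommerce's container -->
<component
    id="TransactionLibrary"
    service="UCommerce.Api.TransactionLibraryInternal, UCommerce"
    type="MyShop.ExtendedTransactionLibrary, MyShop" />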
Querying Products by Custom Properties in Ucommerce
One of the questions I’ve come across a couple of times with Ucommerce is using the LINQ API to query products by custom properties added to a product definition.
Here’s how:
var q = from product in Product.All()
        where product.ProductProperties.Where(property =>
                  (property.ProductDefinitionField.Name == "MyProperty" && property.Value == "MyPropertyValue")
               || (property.ProductDefinitionField.Name == "MyOtherProperty" && property.Value == "MyotherPropertyValue"))
              .Count() == 2
           && product.ParentProductId == null
        select product;
This section of the manual explains how to access and complete a judge preference sheet ("pref sheet") for a tournament which uses mutually preferred judging.
If the tournament you are attending uses MPJ, you will see a "Prefs" tab while viewing your entry (once prefs are opened by the tournament):
Contents
Accessing Prefs as a Coach
As a coach, you can access the prefs for all of your entries by going to the Prefs tab, which will appear in red if you have any incomplete pref sheets. If necessary, select the relevant Judge Group from the menu on the sidebar, then select the pref sheet you want to edit:
Accessing Prefs as a Competitor
In order for you to access your own pref sheet as a competitor, your coach must have first checked the box for "Entries may enter their own prefs" in your School settings.
If they have done so, you will see a list of tournaments with pref sheets in the "Upcoming" section of the sidebar, and then you can select the link for the pref sheet you want to edit:
Filling Out Prefs
The interface for filling out your pref sheet will differ slightly depending on whether the tournament uses tiers or ordinal prefs.
For example, if using tiers, you will select a tier for each judge. Note that "C" is a Constraint, and "S" is a Strike. You can also click the button to export your current pref sheet to a spreadsheet for future reference:
If using tiers, the sidebar will show you how many judges you currently have in each tier, and whether you have the correct number of judges in each:
If using ordinals, you can either give each judge a number (you can use the "Fill Gaps" button at the bottom to help deal with accidental gaps):
Or, you can switch to the "Drag and Drop" mode, and move judges up and down in the order:
To switch back to the previous mode, you can click the button for "Numeric Entry."
When done, make sure to press "Save Prefs" at the bottom.
Duplicating Prefs
If you want to copy prefs from one entry to another, you can use the "Dolly the Sheep" section of the sidebar. This will let you export your pref sheet to a spreadsheet, automatically fill in the sheet based on previous prefs (if using ordinals), copy prefs from another entry, or copy the current sheet over another entry's prefs.
Important Note - Make sure to be careful when copying prefs between one sheet and another - once you confirm the copy, there's no "undo" button. | http://docs.tabroom.com/Prefs | 2017-01-16T14:58:44 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.tabroom.com |
Creating Calabash Tests
Creating Calabash Tests
Before writing tests, the proper directory structure needs to be created to house the tests and the supporting files for the tests. Calabash provides some helper commands to create a basic directory structure. These helpers are run at the command line from within the solution directory. For an iOS project, the calabash-ios gen command would be executed, as illustrated by the following snippet:
$ calabash-ios gen
The executed command automatically creates the directory structure as follows:
Furthermore, calabash-ios gen creates the features folder, a sample my_first.feature file, and the folders step_definitions and support. The tests can then be written in a plain text editor.
The two subdirectories hold Ruby source code files. The files in the step_definitions hold the step definitions - the code snippets that make up the steps that make up the feature. The support directory holds the source code that is shared amongst the step definitions.
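As a minimal illustration (the file names and step text below are made up for this example), a feature and a matching step definition might look like this, assuming Calabash-Android's wait_for_text helper:

# features/my_first.feature
Feature: Login
  Scenario: Seeing the welcome message
    Then I should see "Welcome"

# features/step_definitions/my_first_steps.rb
Then(/^I should see "([^"]*)"$/) do |text|
  wait_for_text(text)   # waits until the given text appears on screen
end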
The calabash-android gen command is the corresponding command for Android projects; it creates the same directory structure.
Note: calabash-ios gen or calabash-android gen only needs to be run once. These commands will terminate if they detect an existing features directory.
Registering for a tournament is fairly straightforward. The general steps are:
1) Link your account to your school in Tabroom, or create a new school
2) Add students and judges to your roster
3) Join a circuit
4) Click "Register" next to the tournament, and use the Entries and Judges tabs to enter students and judges from your roster into the tournament
Contents
Prerequisites
Before you can register for a tournament, you must first have created/linked to a school, added students to your student roster and judges to your judge roster, and ensured your school is in the appropriate circuit. For more information on each of those steps, see the appropriate section in the manual.
IMPORTANT NOTE: If you are a student trying to register for a tournament, you should NOT create a new school just to register yourself. Instead, ask your coach/director for access to your school's Tabroom account, and register yourself that way.
Adding Entries
Once your school has joined a circuit, your Tournaments tab will show you a list of upcoming tournaments that you can register for:
Click the "Register" button next to the tournament to get started. You can also click the red "X" next to a tournament to ignore it so it won't show up on your list. You can then choose "Show Ignored Tournaments" at the bottom if you want to add it back.
Once you click "Register," you may be asked to provide an Adult Contact for tournaments which require it. You can edit this information later on the General tab of your entry:
Next, click the Entries tab and then choose an Event on the sidebar. The number next to each event is the number of entries you currently have in that event.
Use the "Add Entry" box on the right to select a student(s) names, and then click "Add Entry:"
Once you have created entries, you will see them in your list, where you can edit or drop them:
Adding Entries To The Waitlist
If the tournament has a waitlist for an event, you can put students on it by using the box on the sidebar:
Waitlisted entries will then appear in your entry list, where you can edit or drop them. You will be notified by email if the tournament accepts your entries off the waitlist.
Adding Hybrids
At tournaments which allow Hybrids, you can enter them with the "Enter Hybrid Team" option in the sidebar:
Use the dropdown box to select the school you're entering a hybrid with:
Adding Judges
Once you have entered competitors, you can use the Judges tab to enter your judges. The sidebar will show you a list of judge groups you can enter judges in, and will appear in red if you have not entered enough judges to meet your commitment:
Choose a judge group on the right, and then use the Add Judges box in the sidebar to add a judge:
Judges will then appear in your list, where you can edit or drop them:
When adding a judge, you may be asked to provide additional details, such as how many prelim rounds they are entered for, or contact information for them:
Limiting Judge Availability
Once you have added a judge to your entry, you can notify the tournament if they will not be available for certain days/rounds. From your judge entry, click the link under the "Availability" column:
Depending on the tournament, this will let you mark a judge as unavailable for particular rounds or days - click the button in the Available column to toggle between Yes and No:
If fines apply for being under your judging obligation, you will be shown the applicable amount.
Requesting Hired Judging
If a tournament has hired judging available, you can request it from the Judges tab by filling out either the number of judges (usually for IE's) or number of rounds (usually for debate events):
Once you have made a request, it will be visible on the Judges tab, where you can reduce or delete the request if necessary:
If instead the tournament is using a "hiring exchange" where judges can offer rounds themselves, you'll see a notification that there are rounds available for hire, and you can click "Add Hire" to hire them:
For the judge you'd like to hire, fill out the number of rounds you want to hire (up to their maximum rounds available), and click Hire.
If you need to cancel a hired judging request, you can remove it by clicking the judge's name in the "Your Hires" section of the sidebar.
Dropping Entries
To drop your entire entry, click the red button in the sidebar - you'll be asked to confirm first:
To drop individual entries or judges from an event, just select that event in the sidebar, and edit your entry from there.
Printing Your Registration
If you'd like a copy of your entire registration, you can use the links in the sidebar under "Printouts" - this will let you print your registration, an invoice, or export a spreadsheet with your entries:
Purchasing Concessions
Some tournaments have items available for purchase in advance, such as parking permits. If available, these will be listed on the "Concessions" tab:
For each concession, enter the quantity needed and press Save Order.
Editing Your Entry
If you need to make changes to an existing entry, you can access it again from your main account Dashboard, under "Existing Tournament Registrations." Some tournaments use online check-in to confirm that you will be attending, that your registration is correct, etc. At some tournaments, you will only be able to check in online if your registration fees have already been paid.
If the tournament is using on-site registration, you will see an option for "Confirm Onsite" next to the tournament on your account dashboard, under your "Existing tournaments registrations:"
You can also go to the "Onsite Confirmation" tab while viewing your schools' entry:
You will then be shown your current entry, including any drops, judges, etc. If you need to make changes, you will have to contact the tournament directly, since the add/drop deadline will have passed:
If (and ONLY if) everything is correct and all people listed on your entry are at the tournament, you can confirm your entry:
You will then be shown a confirmation page, and given a link to download a registration packet (with things like maps, parking directions, etc.), if available.
Requesting Housing
Rarely, a tournament will provide a limited amount of housing for competitors, usually at the homes of the hosting team or other volunteers. When available, a "Housing" tab will appear on your school's Entry. This will show you a list of your competitors, as well as their housing request status - to request housing, you must first set a gender for each student, and then click the "Request" button next to their name for each date they need housing:
Depending on the tournament, you may be approved automatically, or placed on a waitlist - when the request is approved, you will see their status change to "Yes," as well as new options to cancel the request or transfer it to another student:
If you need to transfer your request between students, click the Transfer button and make the swap:
Note that housing is usually only provided for competitors or judges who are high-school age - most tournaments which provide housing do not do so for adults. | http://docs.tabroom.com/Registration | 2017-01-16T14:57:46 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.tabroom.com |
Troubleshooting
Reminder: this page is to propose solutions to common problems. If you want to report a problem, please look in the forums first, or use the bug tracker. Do not report a problem here: your question will remain unanswered!
Error messages
- Error: please make sure that index.php is your default document for a directory. If you have just installed phpList and get this message, make sure that the DirectoryIndex setting of your Apache configuration has somewhere index.php index.html, and check that index.php is mentioned before index.html. For other webservers please consult your manual to find how to make index.php the default document for a directory. Alternatively you can delete the file "index.html" in the lists directory of phpList.
- Error: IMAP is not included in your PHP installation, cannot continue. There is a major confusion that has been caused by the PHP developers naming a PHP module the IMAP module, even though it is used for more than just IMAP. phpList needs the IMAP functions in PHP in order to connect to the mailbox that will hold the bounces. The mailbox itself is a POP3 mailbox, or you can configure it to be a local mailbox file, but whatever the situation, the IMAP functions are necessary. IMAP functions in PHP have nothing to do with the actual IMAP protocol (at least not as far as phpList is concerned). You should be able to solve this issue by installing (and compiling) the IMAP module into PHP. If you are on a shared hosting account, you should contact your hosting provider.
- Fatal Error: Cannot connect to database, access denied. Please contact the administrator This error indicates there is something wrong with your database connection details. A Database connection requires four things, and they are very sensitive to errors (just one little typo and it won't work): 1) a database host (the name of the server, in many cases "localhost" works, but not always), 2) a database "user" the name of the user who can connect to this host, 3) a database "password" the password to use for the connection, 4) a database "name" the name of the database to use. If any of these four are incorrect, you get the error. So, it's best to double check your settings, and otherwise ask your ISP why it doesn't work. It's possible, although a bit unlikely, that they made a mistake with their permission settings, but you never know.
- HTTP Error 404: File (or directory) not found. The document you requested is not found. If this error message appears when trying to send a message, it is is probably caused by an incorrect value for "website" in the "configuration page" of the admin backend. If that value is correct, you should also check the config.php file for the paths in $pageroot and $adminpages. If this error occurs with any page you try to load, and if your server is running PHP as a cgi (PHPsuExec), it is possible you are erroneously getting a 404 error instead of a 500 error. Try applying the fix described for a HTTP 500 error.
Warning messages
- Warning: The pageroot in your config does not match the current location. Check your config file. This warning indicates a misconfiguration of the following settings in config.php: $pageroot and $adminpages. This can be fixed by entering the correct path names.
- Warning: In safe mode, not everything will work as expected. It is highly recommended to run phpList with "safe mode off". Much has been done to make phpList work in Safe mode, but once you get to systems with more than 500 users, it is likely to cause problems. Also, in safe mode, the automatic bounce processing of phpList will NOT WORK. If you are on a shared hosting account, you could contact your ISP to fix this issue.
- Warning: open_basedir restrictions are in effect, which may be the cause of the next warning. open_basedir is a security related PHP setting, which will limit the opening of files to directories placed within a specified directory-tree. This warning is often displayed in conjunction with another warning, such as "The attachment repository does not exist or is not writable". In effect, the open_basedir restrictions and related warnings imply that you won't be able to upload files to phpList, like attachments, images, and imports. You can fix this by changing the attachment repository and/or temp directory in config.php to a writable location, like your webroot. You will need to create the new directory on your webserver and grant it read/write permissions.
- Warning: The attachment repository does not exist or is not writable. The "attachment repository" is a directory phpList needs for storing the attachments sent with list messages. This problem can be solved by checking in config.php whether an attachment repository has been defined (look for the $attachment_repository setting) and making sure that directory exists and is writable.
- Warning: The temporary directory for uploading ( ) is not writable, so import will fail. The "temporary directory" is where phpList stores temporary files, for instance when upgrading phpList or importing users. You can fix this by checking in config.php whether a temporary directory has been defined (look for the $tmpdir setting) and making sure that directory exists and is writable.
- Warning: Things will work better when PHP magic_quotes_gpc = on. The PHP setting magic_quotes_gpc needs to be enabled for the smooth functioning of phpList. There are several possible ways to fix this. First you could check in the /lists/.htaccess file that it includes the line php_flag magic_quotes_gpc on. If not, try adding this line to see whether it fixes the problem. Alternatively, if your server runs PHP as CGI (PHPsuExec), you can try to enable magic_quotes_gpc by creating the file '/lists/php.ini' and adding this directive: magic_quotes_gpc = 1. If you're on a shared hosting account, you can also contact your ISP to fix this.
- Warning: Things will work better when PHP magic_quotes_runtime = off. The PHP setting magic_quotes__runtime should preferably be disabled. If you're on shared hosting account, you can contact your ISP to fix this.
- Warning: You are trying to use RSS, but XML is not included in your PHP. phpList can send RSS feeds to users. To use this feature you need XML support in your PHP installation. If you're on shared hosting account, you could contact your ISP to fix this.
- Warning: You are trying to send a remote URL, but PEAR::HTTP/Request is not available, so this will fail. To fetch a webpage and send it to a list of users, phpList needs the PEAR::HTTP/Request module to be installed on your server. If you are on a shared hosting account, you can ask your ISP to install PEAR::HTTP/Request module.
Other error messages
- Sorry not implemented yet. This usually indicates that a file is missing in your phpList installation. Check that all files are correctly installed. And if you typed the URL manually, check that it is spelled correctly.
- no input file specified. This is a php error message you get when running PHP as a CGI binary on Apache and indicates you requested a non-existent PHP file. This usually indicates that a file is missing in your phpList installation. Check that all files are correctly installed.
- Error: Your database is out of date, please make sure to upgrade. phpList will display this error message if you forgot to initialise the database tables after installing or upgrading phpList. If this occurs after a new installation, you can fix this by using the 'initialise database' option in the setup section of the admin module. After an upgrade, make sure you click on the upgrade link that is displayed in the 'System Functions' block.
- Database error 1062 while doing query Duplicate entry '0-51' for key 1. If you get this error message during upgrading, you do not need to worry. The upgrade process involves writing data to the database which will generate these responses. The important thing is that the database upgrade procedure ends with "Information: Success" at the end of the page.
- Database error 1071 while doing query Specified key was too long; max key length is 1000 bytes This error is related to using a database with UTF-8 encoding and is a known limitation of MySQL. For more info and fixes, please consult the issue tracker.
- Blank page. This usually indicates a parse error. Please review the changes you made while editing files like config.php or english.inc. You can check config.php for parse errors by changing the error level setting in config.php to $error_level = error_reporting(E_PARSE); See also the PHP manual. Alternatively, if you have command line access, you could use the following command to check for parse errors in a specific php file: php /path/toyour/file/lists/admin/file.php
Phpmailer error messages
- Mailer Error: SMTP Error: Could not connect to SMTP host - This is a phpmailer error that might occur if you incorrectly configured the SMTP mail server settings in config.php. Please review your mail server settings in config.php. See also these forum posts: 28861#28861, 39000#39000, 36977.
- Mailer Error: Language string failed to load - For more info see this report If you have enabled SMTP in config.php see this report. [just a few links for now]
- Mailer Error: Could not instantiate mail function - For more info see this report. If you have enabled SMTP in config.php see this report. [just a few links for now]
Related topics
CategoryDocumentation | http://docs.phplist.com/PhplistTroubleshooting.html | 2017-01-16T14:54:39 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.phplist.com |
Environment variables let you extend your build configuration. There are several read-only Appcircle variables and you can add your own variables to export during the build process and use in custom build scripts.
Environment variables have a key and a secret value that can be defined manually to be used in your project builds globally.
You can create groups of environment variables and import these groups to your builds to customize your builds with additional parameters. | https://docs.appcircle.io/environment-variables/why-to-use-environment-variables-and-secrets | 2021-07-23T19:43:23 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.appcircle.io |
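For example, a custom build script step could read a variable defined in one of your variable groups and export a new one for later commands in the same step (the variable names here are just illustrations):

#!/usr/bin/env bash
echo "Deploying to: $DEPLOY_ENVIRONMENT"        # injected from an Appcircle variable group
export BUILD_LABEL="nightly-$(date +%Y%m%d)"    # available to subsequent commands in this script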
Processing approval requests
Approval Central is the primary console for the users of BMC Remedy Approval Server.
The following topics provide information about how approvers use Approval Central to process approval requests, how approvers and process administrators specify alternate approvers, and how process administrators carry out approval overrides:
Installing BMC Service Resolution for TrueSight Infrastructure Management or ProactiveNet
This topic provides instructions to install BMC Service Resolution 3.5. BMC Service Resolution is a solution for the constituent products: Remedy IT Service Management (Remedy ITSM) and TrueSight Infrastructure Management or ProactiveNet.
The following topics are provided:
Before you begin
- Review the Installation process overview.
- Ensure that your environment meets the requirements specified in supported products and versions.
- Open the network ports described in Network ports.
- You must have completed the tasks described in Preparing for installation.
Take a snapshot of your current working system before running the BMC Service Resolution setup.
If you are using a Solaris computer:
Ensure that the following environment variables are set:
BMC_PROACTIVENET_HOME=/usr/pw
IBRSD_HOME=/usr/pw/integrations/ibrsd
MCELL_HOME=/usr/pw/server
- Edit the .bmc_profile file and add the following entries:
- IBRSD_HOME="/usr/pw/integrations/ibrsd"
- MCELL_HOME="/usr/pw/server"
- export MCELL_HOME="/usr/pw/server"
- export IBRSD_HOME="/usr/pw/integrations/ibrsd"
Ensure that the default %TEMP% directory has sufficient space for the installation. If not, increase the available space or change the temporary location.
(The installer does not use the InstallAnywhere framework and is not affected by the IATEMPDIR system variable.)
To install BMC Service Resolution for ProactiveNet or TrueSight Infrastructure Management and TrueSight Presentation Server
Note
If you are using ProactiveNet 9.5 SP2, before proceeding with this installation, ensure that you have ProactiveNet 9.5 SP2 with the latest hotfix installed.
Download the installation program from the BMC Electronic Product Download site, or navigate to the installation directory on the CD.
Unzip the installer file:
(Windows) BPPM_BSR_3.5.01_Installer_Windows.zip
(Linux) BPPM_BSR_3.5.01_Installer_Linux.zip using the following command:
gunzip -c BPPM_BSR_3.5.01_Installer_Linux.tar.gz |tar xvf -
Navigate to the Disk 1 folder.
- Install the service pack on the ProactiveNet or TrueSight Infrastructure Management server and TrueSight Presentation Server:
- Start the installer.
(Windows) Run Setup.exe.
(UNIX) Run the chmod -R 755 * command.
Run setup.bin.
Ensure that the non-root user has permissions has r/w permissions to the /etc/bmc.profile file.
- Right-click the Setup application and click Run as Administrator.
- In the Welcome window, click Next.
- In the Installation Review pane, review the list of features and click Install.
After the installation is complete, in the Installation Review Summary window, click View Log to review any severe errors or warnings in the log.
The installation log file BPPM_BSR35_install_log.txt is located in the %TEMP% directory on your computer.
Close the log file.
- To exit the installer, click Done.
- If Transport Layer Security (TLS) is enabled on an Infrastructure Management cell, in the <installation_home>\integrations\ibrsd\conf\ibrsd.dir file, make the following change:
Change:
cell pncell_<TSIM_HOSTNAME> mc <TSIM_HOSTNAME>:1828
To:
cell pncell_<TSIM_HOSTNAME> *TLS <TSIM_HOSTNAME>:1828
To verify the installation
On the TrueSight Infrastructure Management server:
- At the command prompt, run the following command:
pw viewhistory
- Verify that the installed version is 3.5.01
To configure BMC Service Resolution in multiserver deployment model of ProactiveNet
For information about the multiserver deployment model, see Central Monitoring Administration best practices from the ProactiveNet 9.6 online documentation.
When implementing BMC Service Resolution, you must perform the following tasks:
- Install BMC Service Resolution on the Central Server and child servers.
- (Optional) Integrate the Central Server and child servers with Remedy ITSM. See Integrating TrueSight Infrastructure Management or ProactiveNet with BMC Service Desk: Incident Management or Integrating TrueSight Infrastructure Management or ProactiveNet with Remedy OnDemand .
- If you do not want to integrate the Central Server with Remedy ITSM, but you want to receive incident information from the Remedy ITSM server, you must register the Central Server with the Remedy ITSM server.
For details about the flow of information in this deployment model, see Configuring BMC Service Resolution in multi server deployment of BMC ProactiveNet.
Where to go from here
Post installation procedures
What about installing for TrueSight 10.1? Where are those instructions? Where does it tell you to install on both TSIM and TSPS?
The instructions for installing on TrueSight 10.1 are here -> Service Pack 1: 3.5.01. Please check the Installation Overview section on that page.
Hope this helps.
Note: The Installer restart BPPM
Hi Erik paul Gonzalez pizarro, do you mean that the installer restarts BPPM? Could you please clarify this point? Thanks.
Please mention the Note : To disable TLS in ibrsd.dir <installation_home>\integrations\ibrsd\conf\ibrsd.dir before upgrade
As mention in below document section to Disable TLS communication between Infrastructure Management server to Oracle database | https://docs.bmc.com/docs/bsr/35/installing-bmc-service-resolution-for-truesight-infrastructure-management-or-proactivenet-594249666.html | 2021-07-23T19:57:31 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.bmc.com |
Enabling self-service in an organization
Users can leverage the self-service options that are available for them to quickly fix their issues instead of raising a help desk ticket. The self-service options make users more self-reliant by reducing their dependency on the help desk. Also, the self-service options enable users to meet help desk professionals in-person, review the request and service status, and so on.
As an administrator, to make your users self-reliant, you can perform the following tasks:
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/digitalworkplaceadvanced/2002/enabling-self-service-in-an-organization-908222938.html | 2021-07-23T18:30:03 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.bmc.com |
Treasury in Moonbeam¶
Introduction¶
A treasury is an on-chain managed collection of funds. Moonbeam will have a community treasury for supporting network initiatives to further the network. This treasury will be funded by a percentage of transaction fees of the network and will be managed by the Council.
Each Moonbeam-based network will have it's own treasury. In other words, the Moonbase Alpha TestNet, Moonshadow on Westend, Moonriver on Kusama, and Moonbeam on Polkadot will each have their own respective treasury. amount must be paid as the bond if it is higher than the deposit percentage
- Spend period — the amount of days, in blocks, during which the treasury funds as many proposals as possible without exceeding the maximum
- Maximum approved proposals — the maximum amount of proposals that can wait in the spending queue
Community Treasury¶
To fund the Treasury, a percentage of each block's transactions fees will be allocated to it. The remaining percentage of the fees are burned (check needs to be higher than the minimum amount, known as the proposal bond minimum, which can be changed by a governance proposal. So, any token holder that has enough tokens to cover the deposit can submit a proposal. If the proposer doesn't have enough funds to cover the deposit, the extrinsic will fail due to insufficient funds, but transaction fees will still be deducted.
Once a proposal has been submitted, is subject to governance, and the council votes on it. If the proposal gets rejected, the deposit will be lost and transfered to the treasury pot. If approved by the council, the proposal enters a queue to be placed into a spend period. If the spending queue happens to contain the number of maximum approved proposals, the proposal submission will fail similarly to how it would if the proposer's balance is too low.
Once the proposal is in a spend period, the funds will get distributed to the beneficiary and the original deposit will be returned to the proposer. If the treasury runs out of funds, the remaining approved proposals will remain in storage until the next spend period when the Treasury has enough funds again.
The happy path for a treasury proposal is shown in the following diagram:
| https://docs.moonbeam.network/treasury/overview/ | 2021-07-23T18:58:12 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['/images/treasury/treasury-overview-banner.png',
'Treasury Moonbeam Banner'], dtype=object)
array(['/images/treasury/treasury-proposal-roadmap.png',
'Treasury Proposal Happy Path Diagram'], dtype=object)] | docs.moonbeam.network |
The MainText class represents the main text of a TX Text Control document. A TX Text Control document consists of the main text, and additionally other pieces of text such as text frames and headers or footers. The MainText class implements the IFormattedText interface. Differently from the TextControl or WPF.TextControl classes the MainText object's collections of the IFormattedText interface do not depend on the input focus. For example, the TextControl.Tables collection contains the tables of the main text, if the main text has the input focus, but it contains the tables of the text frame, if a text frame has the input focus. The MainText.Tables collection always contains the tables of the main text regardless of the input focus.
public class MainText
Public Class MainText
Introduced: 16.0.
Contact Us Sitemap Imprint Updated on July 16, 2021© 2021 Text Control GmbH | https://docs.textcontrol.com/textcontrol/wpf/ref.txtextcontrol.maintext.htm | 2021-07-23T20:00:45 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.textcontrol.com |
MessageTimestampRouter¶
The following provides usage information for the Confluent SMT
io.confluent.connect.transforms.MessageTimestampRouter.
Description¶
Update the record’s topic field as a function of the original topic value and the record’s timestamp field.
This is useful for sink connectors, because the topic field often determines the equivalent entity name in the destination system (for example, a database table or search index name). This SMT extracts the timestamp from the message value’s specified field, which is especially useful for log data in which the timestamp is stored as a field in the message. The message value must be a Map instance (Structs are not currently supported). See TimestampRouter to specify a basic topic pattern and timestamp format.
Installation¶
This transformation is developed by Confluent and does not ship by default with Apache Kafka® or Confluent Platform. You can install this transformation via the Confluent Hub Client:
confluent-hub install confluentinc/connect-transforms:latest
Example¶
The following example extracts a field named
timestamp,
time, or
ts
from the message value, in the order specified by the
message.timestamp.keys
configuration. This timestamp value is originally in the format specified by
message.timestamp.format. It adds a topic prefix and appends the timestamp
of the format specified by
topic.timestamp.format to the message topic.
"transforms": "MessageTimestampRouter", "transforms.MessageTimestampRouter.type": "io.confluent.connect.transforms.MessageTimestampRouter", "transforms.MessageTimestampRouter.topic.format": "foo-${topic}-${timestamp}", "transforms.MessageTimestampRouter.message.timestamp.format": "yyyy-MM-dd", "transforms.MessageTimestampRouter.topic.timestamp.format": "yyyy.MM.dd", "transforms.MessageTimestampRouter.message.timestamp.keys": "timestamp,time,ts"
Message value:
{"time":"2019-08-06"}
Topic (before):
bar
Topic (after):
foo-bar-2019.08.06. | https://docs.confluent.io/platform/6.2.0/connect/transforms/messagetimestamprouter.html | 2021-07-23T18:01:12 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.confluent.io |
cupy.atleast_2d¶
- cupy.atleast_2d(*arys)[source]¶
Converts arrays to arrays with dimensions >= 2.
If an input array has dimensions less than two, then this function inserts new axes at the head of dimensions to make it have two dimensions.
- Parameters
arys (tuple of arrays) – Arrays to be converted. All arguments must be
cupy.ndarrayobjects.
- Returns
If there are only one input, then it returns its converted version. Otherwise, it returns a list of converted arrays.
See also | https://docs.cupy.dev/en/stable/reference/generated/cupy.atleast_2d.html | 2021-07-23T18:18:26 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cupy.dev |
ID2D1RenderTarget::PushAxisAlignedClip(constD2D1_RECT_F&,D2D1_ANTIALIAS_MODE) method (d2d1.h)
Specifies a rectangle to which all subsequent drawing operations are clipped.
Syntax
void PushAxisAlignedClip( const D2D1_RECT_F & clipRect, D2D1_ANTIALIAS_MODE antialiasMode );
Parameters
clipRect
Type: [in] const D2D1_RECT_F &
The size and position of the clipping area, in device-independent pixels.
antialiasMode
Type: [in] D2D1_ANTIALIAS_MODE
The antialiasing mode that is used to draw the edges of clip rects that have subpixel boundaries, and to blend the clip with the scene contents. The blending is performed once when the PopAxisAlignedClip method is called, and does not apply to each primitive within the layer.
Return value
None
Remarks
The clipRect is transformed by the current world transform set on the render target. After the transform is applied to the clipRect that is passed in, the axis-aligned bounding box for the clipRect is computed. For efficiency, the contents are clipped to this axis-aligned bounding box and not to the original clipRect that is passed in.
The following diagrams show how a rotation transform is applied to the render target, the resulting clipRect, and a calculated axis-aligned bounding box.
- Assume the rectangle in the following illustration is a render target that is aligned to the screen pixels.
- Apply a rotation transform to the render target. In the following illustration, the black rectangle represents the original render target and the red dashed rectangle represents the transformed render target.
- After calling PushAxisAlignedClip, the rotation transform is applied to the clipRect. In the following illustration, the blue rectangle represents the transformed clipRect.
- The axis-aligned bounding box is calculated. The green dashed rectangle represents the bounding box in the following illustration. All contents are clipped to this axis-aligned bounding box..
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as PushAxisAlignedClip) failed, check the result returned by the ID2D1RenderTarget::EndDraw or ID2D1RenderTarget::Flush methods. | https://docs.microsoft.com/es-ES/windows/win32/api/d2d1/nf-d2d1-id2d1rendertarget-pushaxisalignedclip(constd2d1_rect_f__d2d1_antialias_mode) | 2022-08-07T21:19:30 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
OCR
Optical Character Recognition is a service that converts scanned documents to a searchable and editable format, such as an MS Word document or a searchable PDF. To provide this functionality, you can either use the MyQ OCR (Optical Character Recognition) server, which can be purchased as a part of the MyQ solution,
or you can employ a third-party application.
For information on how to purchase the MyQ OCR server, please contact the MyQ sales department.
Activation and setup
The OCR feature has to be enabled on the Scanning & OCR settings tab, under OCR.
Choose the OCR server type from a Custom OCR server or the MyQ OCR Server.
You can change the folder where the scanned data is sent in the OCR working folder field. It is however, not recommended to change the default folder (C:\ProgramData\MyQ\OCR).
The OCR folder contains two sub-folders: in and out. In the in folder, the scanned documents are stored before being processed. In the out folder, the processed documents are saved by the OCR software and are ready to be sent.
A document sent to be processed by OCR is received with a certain delay, depending on the OCR software speed, and on the size of the document.
Running the OCR software on the same production server as MyQ may affect your system’s performance. | https://docs.myq-solution.com/print-server/8.2/ocr | 2022-08-07T21:54:04 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../print-server/8.2/166822162/image-20201207-142455.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'Enabling OCR'], dtype=object) ] | docs.myq-solution.com |
unreal.GameMode¶
- class unreal.GameMode(outer=None, name='None')¶
Bases:
unreal.GameModeBase
GameMode is a subclass of GameModeBase that behaves like a multiplayer match-based game. It has default behavior for picking spawn points and match state. If you want a simpler base, inherit from GameModeBase instead.
C++ Source:
Module: Engine
File: GameMode.h
Editor Properties: (see get_editor_property/set_editor_property)
actor_guid(Guid): [Read-Write] Actor Guid: The GUID for this actor.
- Note: Don’t use VisibleAnywhere here to avoid getting the CPF_Edit flag and get this property reset when resetting to defaults.
See FActorDetails::AddActorCategory and EditorUtilities::CopySingleProperty for details.
allow_tick_before_begin_play(bool): [Read-Write] Allow Tick Before Begin Play:: Always relevant for network (overrides bOnlyRelevantToOwner).
auto_destroy_when_finished(bool): [Read-Write] Auto Destroy when Finished: If true then destroy self when “finished”, meaning all relevant components report that they are done and no timelines or timers are in flight.
auto_receive_input(AutoReceiveInput): [Read-Write] Auto Receive Input: Automatically registers this actor to receive input from a player.
block_input(bool): [Read-Write] Block Input: If true, all input on the stack below this actor will not be considered
call_pre_replication(bool): [Read-Write] Call Pre Replication
call_pre_replication_for_replay(bool): [Read-Write] Call Pre Replication for Replay
can_be_damaged(bool): [Read-Write] Can be Damaged: Whether this actor can take damage. Must be true for damage events (e.g. ReceiveDamage()) to be called. see: see: TakeDamage(), ReceiveDamage()
can_be_in_cluster(bool): [Read-Write] Can be in Cluster: If true, this actor can be put inside of a GC Cluster to improve Garbage Collection performance
custom_time_dilation(float): [Read-Write] Custom Time Dilation: Allow each actor to run at a different time speed. The DeltaTime for a frame is multiplied by the global TimeDilation (in WorldSettings) and this CustomTimeDilation for this actor’s tick.
data_layers(Array(ActorDataLayer)): [Read-Write] Data Layers: DataLayers the actor belongs to.
default_pawn_class(type(Class)): [Read-Write] Default Pawn Class: The default pawn class used by players.
default_player_name(Text): [Read-Write] Default Player Name: The default player name assigned to players that join with no name specified.
default_update_overlaps_method_during_level_streaming(ActorUpdateOverlapsMethod): [Read-Only] Default Update Overlaps Method During Level Streaming: see: UpdateOverlapsMethodDuringLevelStreaming
delayed_start(bool): [Read-Write] Delayed Start: Whether the game should immediately start when the first player logs in. Affects the default behavior of ReadyToStartMatch
enable_auto_lod_generation(bool): [Read-Write] Enable Auto LODGeneration: Whether this actor should be considered or not during HLOD generation.
find_camera_component_when_view_target(bool): [Read-Write] Find Camera Component when View Target: If true, this actor should search for an owned camera component to view through when used as a view target.
game_session_class(type(Class)): [Read-Write] Game Session Class: Class of GameSession, which handles login approval and online game interface
game_state_class(type(Class)): [Read-Write] Game State Class: Class of GameState associated with this GameMode.
generate_overlap_events_during_level_streaming(bool): [Read-Write] Generate Overlap Events During Level Streaming:] Hidden: Allows us to only see this Actor in the Editor, and not in the actual game. see: SetActorHiddenInGame()
hlod_layer(HLODLayer): [Read-Write] HLODLayer: The UHLODLayer in which this actor should be included.
hud_class(type(Class)): [Read-Write] HUDClass: HUD class this game uses.
ignores_origin_shifting(bool): [Read-Write] Ignores Origin Shifting: Whether this actor should not be affected by world origin shifting.
inactive_player_state_life_span(float): [Read-Write] Inactive Player State Life Span: Time a playerstate will stick around in an inactive state after a player logout
initial_life_span(float): [Read-Write] Initial Life Span: How long this Actor lives before dying, 0=forever. Note this is the INITIAL value and should not be modified once play has begun.
input_priority(int32): [Read-Write] Input Priority: The priority of this input component when pushed in to the stack.
instigator(Pawn): [Read-Write] Instigator: Pawn responsible for damage and other gameplay events caused by this actor.
is_editor_only_actor(bool): [Read-Write] Is Editor Only Actor: Whether this actor is editor-only. Use with care, as if this actor is referenced by anything else that reference will be NULL in cooked builds
is_spatially_loaded(bool): [Read-Write] Is Spatially Loaded: Determine if this actor is spatially loaded when placed in a partitioned world.
If true, this actor will be loaded when in the range of any streaming sources and if (1) in no data layers, or (2) one or more of its data layers are enabled. If false, this actor will be loaded if (1) in no data layers, or (2) one or more of its data layers are enabled.
layers(Array(Name)): [Read-Write] Layers: Layers the actor belongs to. This is outside of the editoronly data to allow hiding of LD-specified layers at runtime for profiling.
max_inactive_players(int32): [Read-Write] Max Inactive Players: The maximum number of inactive players before we kick the oldest ones out
min_net_update_frequency(float): [Read-Write] Min Net Update Frequency: Used to determine what rate to throttle down to when replicated properties are changing infrequently
min_respawn_delay(float): [Read-Write] Min Respawn Delay: Minimum time before player can respawn after dying.
net_cull_distance_squared(float): [Read-Write] Net Cull Distance Squared: Square of the max distance from the client’s viewpoint that this actor is relevant and will be replicated.
net_dormancy(NetDormancy): [Read-Write] Net Dormancy: Dormancy setting for actor to take itself off of the replication list without being destroyed on clients.
net_load_on_client(bool): [Read-Write] Net Load on Client: This actor will be loaded on network clients during map load
net_priority(float): [Read-Write] Net Priority: Priority for this actor when checking for replication in a low bandwidth or saturated situation, higher priority means it is more likely to replicate
net_update_frequency(float): [Read-Write] Net Update Frequency: How often (per second) this actor will be considered for replication, used to determine NetUpdateTime
net_use_owner_relevancy(bool): [Read-Write] Net Use Owner Relevancy: If actor has valid Owner, call Owner’s IsNetRelevantFor and GetNetPriority
num_bots(int32): [Read-Write] Num Bots: number of non-human players (AI controlled but participating as a player).
num_players(int32): [Read-Write] Num Players: Current number of human players.
num_spectators(int32): [Read-Write] Num Spectators: Current number of spectators.
num_travelling_players(int32): [Read-Write] Num Travelling Players: Number of players that are still traveling from a previous map
on_actor_begin_overlap(ActorBeginOverlapSignature): [Read-Write] On Actor Begin Overlap:] On Actor End Overlap: Called when another actor stops overlapping this actor. note: Components on both this and the other Actor must have bGenerateOverlapEvents set to true to generate overlap events.
on_actor_hit(ActorHitSignature): [Read-Write] On Actor Hit:.
on_begin_cursor_over(ActorBeginCursorOverSignature): [Read-Write] On Begin Cursor Over: Called when the mouse cursor is moved over this actor if mouse over events are enabled in the player controller.
on_clicked(ActorOnClickedSignature): [Read-Write] On Clicked: Called when the left mouse button is clicked while the mouse is over this actor and click events are enabled in the player controller.
on_destroyed(ActorDestroyedSignature): [Read-Write] On Destroyed: Event triggered when the actor has been explicitly destroyed.
on_end_cursor_over(ActorEndCursorOverSignature): [Read-Write] On End Cursor Over: Called when the mouse cursor is moved off this actor if mouse over events are enabled in the player controller.
on_end_play(ActorEndPlaySignature): [Read-Write] On End Play: Event triggered when the actor is being deleted or removed from a level.
on_input_touch_begin(ActorOnInputTouchBeginSignature): [Read-Write] On Input Touch Begin: Called when a touch input is received over this actor when touch events are enabled in the player controller.
on_input_touch_end(ActorOnInputTouchEndSignature): [Read-Write] On Input Touch End: Called when a touch input is received over this component when touch events are enabled in the player controller.
on_input_touch_enter(ActorBeginTouchOverSignature): [Read-Write] On Input Touch Enter: Called when a finger is moved over this actor when touch over events are enabled in the player controller.
on_input_touch_leave(ActorEndTouchOverSignature): [Read-Write] On Input Touch Leave: Called when a finger is moved off this actor when touch over events are enabled in the player controller.
on_released(ActorOnReleasedSignature): [Read-Write] On Released: Called when the left mouse button is released while the mouse is over this actor and click events are enabled in the player controller.
on_take_any_damage(TakeAnyDamageSignature): [Read-Write] On Take Any Damage: Called when the actor is damaged in any way.
on_take_point_damage(TakePointDamageSignature): [Read-Write] On Take Point Damage: Called when the actor is damaged by point damage.
on_take_radial_damage(TakeRadialDamageSignature): [Read-Write] On Take Radial Damage: Called when the actor is damaged by radial damage.
only_relevant_to_owner(bool): [Read-Write] Only Relevant to Owner: If true, this actor is only relevant to its owner. If this flag is changed during play, all non-owner channels would need to be explicitly closed.
optimize_bp_component_data(bool): [Read-Write] Optimize BPComponent Data: Whether to cook additional data to speed up spawn events at runtime for any Blueprint classes based on this Actor. This option may slightly increase memory usage in a cooked build.
options_string(str): [Read-Write] Options String: Save options string and parse it when needed
pauseable(bool): [Read-Write] Pauseable: Whether the game is pauseable.
pivot_offset(Vector): [Read-Write] Pivot Offset: Local space pivot offset for the actor, only used in the editor
player_controller_class(type(Class)): [Read-Write] Player Controller Class: The class of PlayerController to spawn for players logging in.
player_state_class(type(Class)): [Read-Write] Player State Class: A PlayerState of this class will be associated with every player to replicate relevant player information to all clients.
primary_actor_tick(ActorTickFunction): [Read-Write] Primary Actor Tick:] Relevant for Level Bounds: If true, this actor’s component’s bounds will be included in the level’s bounding box unless the Actor’s class has overridden IsLevelBoundsRelevant
replay_rewindable(bool): [Read-Write] Replay Rewindable:.
replay_spectator_player_controller_class(type(Class)): [Read-Write] Replay Spectator Player Controller Class: The PlayerController class used when spectating a network replay.
replicate_movement(bool): [Read-Write] Replicate Movement: If true, replicate movement/location related properties. Actor must also be set to replicate. see: SetReplicates() see:
replicated_movement(RepMovement): [Read-Write] Replicated Movement: Used for replication of our RootComponent’s position and velocity
replicates(bool): [Read-Write] Replicates: If true, this actor will replicate to remote machines see: SetReplicates()
root_component(SceneComponent): [Read-Write] Root Component: The component that defines the transform (location, rotation, scale) of this Actor in the world, all other components must be attached to this one somehow
runtime_grid(Name): [Read-Write] Runtime Grid: Determine in which partition grid this actor will be placed in the partition (if the world is partitioned). If None, the decision will be left to the partition.
server_stat_replicator_class(type(Class)): [Read-Write] Server Stat Replicator Class
spawn_collision_handling_method(SpawnActorCollisionHandlingMethod): [Read-Write] Spawn Collision Handling Method: Controls how to handle spawning this actor in a situation where it’s colliding with something else. “Default” means AlwaysSpawn here.
spectator_class(type(Class)): [Read-Write] Spectator Class: The pawn class used by the PlayerController for players when spectating.
sprite_scale(float): [Read-Write] Sprite Scale: The scale to apply to any billboard components in editor builds (happens in any WITH_EDITOR build, including non-cooked games).
start_players_as_spectators(bool): [Read-Write] Start Players as Spectators: Whether players should immediately spawn when logging in, or stay as spectators until they manually spawn
tags(Array(Name)): [Read-Write] Tags: Array of tags that can be used for grouping and categorizing.
update_overlaps_method_during_level_streaming(ActorUpdateOverlapsMethod): [Read-Write] Update Overlaps Method During Level Streaming:. see: bGenerateOverlapEventsDuringLevelStreaming, DefaultUpdateOverlapsMethodDuringLevelStreaming, GetUpdateOverlapsMethodDuringLevelStreaming()
use_seamless_travel(bool): [Read-Write] Use Seamless Travel: Whether the game perform map travels using SeamlessTravel() which loads in the background and doesn’t disconnect clients
- property delayed_start¶
[Read-Only] Delayed Start: Whether the game should immediately start when the first player logs in. Affects the default behavior of ReadyToStartMatch
- Type
-
- end_match() None ¶
Transition from InProgress to WaitingPostMatch. You can call this manually, will also get called if ReadyToEndMatch returns true
- get_match_state() Name ¶
Returns the current match state, this is an accessor to protect the state machine flow
- Return type
-
- is_match_in_progress() bool ¶
Returns true if the match state is InProgress or other gameplay state
- Return type
-
- property min_respawn_delay¶
[Read-Only] Min Respawn Delay: Minimum time before player can respawn after dying.
- Type
-
- property num_bots¶
[Read-Only] Num Bots: number of non-human players (AI controlled but participating as a player).
- Type
(int32)
- property num_travelling_players¶
[Read-Only] Num Travelling Players: Number of players that are still traveling from a previous map
- Type
(int32)
- on_set_match_state(new_state) None ¶
Implementable event to respond to match state changes
- Parameters
-
- ready_to_end_match() bool ¶
Returns true if ready to End Match. Games should override this
- Return type
-
- ready_to_start_match() bool ¶
Returns true if ready to Start Match. Games should override this
- Return type
- | https://docs.unrealengine.com/5.0/en-US/PythonAPI/class/GameMode.html | 2022-08-07T21:35:17 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.unrealengine.com |
Read the Docs Blog - Posts by Manuel Kaufmann 2022-05-09T00:00:00Z ABlog Announcing user-defined build jobs 2022-05-09T00:00:00Z 2022-05-09T00:00:00Z Manuel Kaufmann <div class="section" id="announcing-user-defined-build-jobs"> <p>We are happy to announce a new feature to specify user-defined build jobs on Read the Docs. If your project requires custom commands to be run in the middle of the build process, they can now be executed with the new config key <code class="docutils literal notranslate"><span class="pre">build.jobs</span></code>. This opens up a complete world full of new and exciting possibilities to our users.</p> <div class="section" id="background-about-the-build-process"> <h2>Background about the build process</h2> <p>If your project has ever required a custom command to run during the build process, you probably wished you could easily specify this. You might have used a hacky solution inside your Sphinx’s <code class="docutils literal notranslate"><span class="pre">conf.py</span></code> file, but this was not a great solution to this problem.</p> <p>That solution wasn’t supported, and it had another important limitation: it only ran the commands <em>from inside</em> Sphinx’s build command. It was impossible to run a command after cloning the repository, or before starting the Sphinx build process. With the addition of the new config key <code class="docutils literal notranslate"><span class="pre">build.jobs</span></code>, you can do these things and more!</p> </div> <div class="section" id="using-build-jobs-in-your-configuration-file"> <h2>Using <code class="docutils literal notranslate"><span class="pre">build.jobs</span></code> in your configuration file</h2> <p><a class="reference external" href="">Read the Docs’ build process</a> is well-defined and divided into these pre-defined jobs: <code class="docutils literal notranslate"><span class="pre">checkout</span></code>, <code class="docutils literal notranslate"><span class="pre">system_dependencies</span></code>, <code class="docutils literal notranslate"><span class="pre">create_environment</span></code>, <code class="docutils literal notranslate"><span class="pre">install</span></code>, <code class="docutils literal notranslate"><span class="pre">build</span></code> and <code class="docutils literal notranslate"><span class="pre">upload</span></code>. Now, with the introduction of <code class="docutils literal notranslate"><span class="pre">build.jobs</span></code>, you can hook these with custom commands that run before & after.</p> <p>Let’s say your project requires to run a command immediately <em>after</em> clonning the repository in the <code class="docutils literal notranslate"><span class="pre">checkout</span></code> job. 
In that case, you will want to use the <code class="docutils literal notranslate"><span class="pre">build.jobs.post_checkout</span></code> config key:</p> <div class="highlight-yaml notranslate"><div class="highlight"><pre><span></span><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">2</span><span class="w"></span> <span class="nt">build</span><span class="p">:</span><span class="w"></span> <span class="w"> </span><span class="nt">os</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">ubuntu-22.04</span><span class="w"></span> <span class="w"> </span><span class="nt">tools</span><span class="p">:</span><span class="w"></span> <span class="w"> </span><span class="nt">python</span><span class="p">:</span><span class="w"> </span><span class="s">"3.10"</span><span class="w"></span> <span class="w"> </span><span class="nt">jobs</span><span class="p">:</span><span class="w"></span> <span class="w"> </span><span class="nt">post_checkout</span><span class="p">:</span><span class="w"></span> <span class="w"> </span><span class="c1"># Unshallow the git repository to</span><span class="w"></span> <span class="w"> </span><span class="c1"># have access to its full history</span><span class="w"></span> <span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git fetch --unshallow</span><span class="w"></span> </pre></div> </div> <p>In this example, Read the Docs will run <code class="docutils literal notranslate"><span class="pre">git</span> <span class="pre">clone</span> <span class="pre">...</span></code> as the first command and after that will run <code class="docutils literal notranslate"><span class="pre">git</span> <span class="pre">fetch</span> <span class="pre">--unshallow</span></code>. Then it will continue with the rest of the remaining pre-defined jobs.</p> <p>To learn more about this how to use this new feature, read the <a class="reference external" href="">build customization</a> documentation page.</p> </div> <div class="section" id="the-future-of-the-builders"> <h2>The future of the builders</h2> <p>We are already discussing how we can expand this feature in the future. We’d like to support more complex build processes, or even projects that are not using Sphinx or MkDocs. That would open the door to a lot new possibilities. So, <a class="reference external" href="">subscribe to our mailing list</a> and stay tuned!</p> </div> <div class="section" id="try-it-out-now"> <h2>Try it out now!</h2> <p>We encourage you to give it a try now to power up your documentation’s build process. As we mentioned, we are designing the future of our builders and we are collecting ideas from our users. 
Your feedback will allow us to ensure our design is flexible and customizable enough, and that we’re supporting as many use cases as possible.</p> <p>Please, contact us at <a class="reference external" href="mailto:support%40readthedocs.com">support<span>@</span>readthedocs<span>.</span>com</a> to let us know how you are using <code class="docutils literal notranslate"><span class="pre">build.jobs</span></code>!</p> </div> </div>> Better support for scientific project documentation 2020-04-28T00:00:00Z 2020-04-28T00:00:00Z Manuel Kaufmann <div class="section" id="better-support-for-scientific-project-documentation"> <p>In the past year, we’ve been having issues when building projects’ documentation using <code class="docutils literal notranslate"><span class="pre">conda</span></code>. Our build servers were running out of memory and failing projects’ builds. Read the Docs has this memory constraint to avoid misuse of the platform, causing a poor experience for other users.</p> <p>Our first solution was a dedicated server for these kind of projects. We would manually assign them to this server on user requests. This workaround worked okay, but it involves a bad experience for the user and also us doing a manual step each time. Over time, we hit again the same issue of OOM, even giving all the memory available to one project to build its documentation. After some research, we found that <a class="reference external" href="">this is a known issue in the conda community</a> and there are some different attempts to fix it (like <a class="reference external" href="">mamba</a>). Unfortunately, none of them became the standard yet and the problem is still there.</p> <p>Meanwhile, in another completely different set of work, we were migrating all of our infrastructure to a different architecture:</p> <ul class="simple"> <li>Azure <em>Virtual machine scale sets</em> for autoscaling our servers,</li> <li>Azure <em>storage accounts</em> to store all the user’s documentation</li> <li>Proxito, an internal service to remove a lot of the state from our servers (more about this migration coming in a future post)</li> </ul> <p>This helped us to <em>reduce costs</em> and allowed us to spin up <em>bigger instances</em> for the builders. We have also made some other important operational changes:</p> <ul class="simple"> <li>Our builders are now single-process, giving all the memory available to only one project without worrying about affecting others.</li> <li>We added <a class="reference external" href="">custom task router</a> that routes small builds to small servers (3GB RAM), and big builds to larger servers (7GB RAM). This removes the need for users to ask us to upgrade their isntances.</li> <li>Assigned all <code class="docutils literal notranslate"><span class="pre">conda</span></code> projects to be built by big servers by default.</li> </ul> <p>If you ever had a memory issue on Read the Docs, we’d appreciate if you give it a try again. Please <a class="reference external" href="mailto:support%40readthedocs.org">let us know</a> about your experience when building scientific documentation. 
If you know that any of your friends were hitting this issue in the past, we encourage you to talk to them and tell them to give it a try.</p> <p>We are excited to have more scientific documentation on our platform and we are doing our small part to make this happen and have better science in the world.</p> <> | https://blog.readthedocs.com/archive/author/manuel-kaufmann/atom.xml | 2022-08-07T21:40:14 | CC-MAIN-2022-33 | 1659882570730.59 | [] | blog.readthedocs.com |
Quick start
In this guide you will go through the required steps to integrate Astara Connect in your own systems or platforms.
1.- Creating an account
In order to start using our APIs, you will need to create an account. Once you have registered, come back to this page to continue.
Authentication methods
Astara Connect offers you two main methods for authenticating:
- Your current user (using your email and password)
- An API token created for your organization.
With email and password
By default, Astara Connect allows you to authenticate your user using the email and password that you have used to register in the site.
You can login using the endpoint:
With a payload like this:
You will get a JWT token that you will have to use within your API calls to authenticate:
With an API token
If you want to avoid using your own credentials, you can create a token that will identify the organization instead of you.
For creating the token, use the following endpoint:
This will return an API key that you can use to authenticate your API calls using the following HEADER in your calls:
Warning
You cannot recover the key. If you don't save it the system cannot show it to you as is fully encrypted. You will need to generate a new key, and replace it.
2.- Registering your first tracker
Once you have an account, you will need to register a tracker so you can start getting geo-data that Astara Connect will process and give you insights.
A tracker is a device that is connected to a vehicle (or moving object) that will be sending at least the following information:
- Latitude.
- Longitude.
- Timestamp.
In order to register it, you will need to find out the default group (organization is named right now but will be changed into group in coming weeks) that your account has access to.
There you will have the OIDs and extra data of the groups where you can register a tracker to. Pick one, and save the OID for using it later on.
Now, register the tracker:
You will have to send a JSON payload similar to this:
Info
The object_id can store any metadata from your vehicle. By default, we recognize some keys like license_plate, make, model, version, vin, ean, serial_number as those are common fields within the cars, vans, bikes, kickscooters and trucks. But you can store as much information as you want.
To finish the tracker registration, you will need to launch the validation procedure. It's a security step that lets us know that you have the rights to access the tracker data.The validation procedure could work on a synchronous or asynchronous way. To find out the results, you can look in the tracker's validation status field. As usual and if everything goes well the status will be pending for data. It means that the tracker was registered successfully and the system is waiting to receive data from it. Once we have obtained any positioning data of the tracker, the whole connection process would be completed (final status reached: connected).
3.- Getting telemetrics data from your tracker
Now that you have the tracker, as soon as the tracker is validated by the system, you will be able to start getting telemetrics data from it like its last position. In order to check if the tracker is receiving data properly, get the tracker and check its status value. If its value is true, then, your tracker is fully working within Astara Connect:
And it will return something like this:
Then, with that tracker's oid you can get its last saved position:
And that will give you something similar to this:Astara Connect provides many more endpoints for the telemetrics, like computed trips, moving-stopped trackers, etc. Check the telemetrics section as well as all our endpoints.
4.- Next steps
This guide has covered the main basics, for setting you up quickly. Now we recommend you to explore our documentation and get an idea about what Astara Connect can offer you. We recommend you to read the following sections of our documentation: | https://docs.astaraconnect.com/quickstart | 2022-08-07T23:00:18 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.astaraconnect.com |
3. Getting started with V-IPU
After installing the
vipu tool, the next step is to establish connectivity
with the V-IPU controller. This assumes that the system administrator has created
a user account for you and allocated some IPUs to that user ID.
You will need the following information from the V-IPU administrator:
V-IPU controller host name or IP address
V-IPU controller port number
User ID and API access key
Run the following command to make the server report its version:
$ vipu --server-version --api-host 10.1.2.3 --api-port 9090 --api-user-id alice --api-access-key 685XK4uCzN
This example assumes a V-IPU controller running on host 10.1.2.3 with port 9090. You should use the details provided by your system administrator.
To avoid having to add options to each command, you can specify the server details in environment variables:
$ export VIPU_CLI_API_HOST=10.1.2.3 $ export VIPU_CLI_API_PORT=9090 $ export VIPU_CLI_API_USER_ID=alice $ export VIPU_CLI_API_ACCESS_KEY=685XK4uCzN $ vipu --server-version
Alternatively, you can add them to a configuration file:
$ cat ~/.vipu.hcl api-host=10.1.2.3 api-port=9090 api-user-id=alice api-access-key=685XK4uCzN $ vipu --server-version
The next step is to allocate some IPUs to run your software on.
3.1. Creating a partition
This section explains how to create a usable partition on the IPU system. A partition defines a set of IPUs used for running end-user applications.
The simplest way to get started is to create a “reconfigurable” partition. This makes a set of IPUs available to users in a flexible way as a number of single-IPU device IDs and a set of multi-IPU device IDs if the partition is larger than a single IPU.
You can do this.
When you create a partition, a file is created in the directory
$HOME/.ipuof.conf.d/.
This file contains information needed by Poplar to connect to the IPUs. Note that there
should only be one file in the directory, so you should delete the partition (with the
vipu remove partition partition-name command) before creating another one.
In a fully deployed system, you may want to define partitions containing sets of IPUs configured in specific ways.
See the V-IPU User Guide for full details of the V-IPU command-line software, allocating IPUs for your program, and running code on the IPU-POD.
3.2. Running a program on the IPU-POD
You can now run one of the example programs from the SDK. The program
adder_ipu.cpp builds a simple graph to add two vectors together and return the result.
Make a copy of the
poplar/examples.3. Hardware and software monitoring
The IPU hardware, and any programs running on it, can be monitored and analysed in various ways.
3.3.
3.3.2, Running a program on the IPU-POD, try running:
$ POPLAR_LOG_LEVEL=DEBUG ./adder_ipu
You will see each stage of the process listed before the program output appears. The logging options are documented in the Poplar & PopLibs User Guide.
3.3.3.. | https://docs.graphcore.ai/projects/ipu-pod-getting-started/en/latest/programs.html | 2022-08-07T22:22:38 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.graphcore.ai |
MyQ Recharge Terminal
About MyQ Recharge Terminal 8.2 LTS
The MyQ Recharge Terminal is a device with a welded metal construction (2mm thick steel) that is used as an extension to the MyQ credit accounting. The device provides MyQ users with the option to view and increase their credit balance.
In the basic version, the MyQ Recharge Terminal is equipped with a 19-inch full-color touch display and a NanoPC. The MyQ Recharge Terminal can be equipped with the following optional hardware:
Card reader
Coin detector with a self-locking coin bag
Bill reader
Card dispenser
Receipt printer
This guide walks the MyQ administrator through the setup and administration of the MyQ Recharge Terminal, the operator through the setup and management of the MyQ Recharge Terminal, and the MyQ user through the usage of the MyQ Recharge Terminal. | https://docs.myq-solution.com/recharge-terminal/8.2/ | 2022-08-07T23:11:53 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../recharge-terminal/8.2/689700995/RechargeTerminal.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'Recharge terminal'], dtype=object) ] | docs.myq-solution.com |
Measurements¶
Existing meters¶
For the list of existing meters see the tables under the Measurements page of Ceilometer in the Administrator Guide.
New measurements¶
Ceilometer is designed to collect measurements from OpenStack services and from other external components. If you would like to add new meters to the currently existing ones, you need to follow the guidelines given in this section.
Types¶
Three type of meters are defined in Ceilometer:
When you’re about to add a new meter choose one type from the above list, which is applicable.
Units¶
Whenever a volume is to be measured, SI approved units and their approved symbols or abbreviations should be used. Information units should be expressed in bits (‘b’) or bytes (‘B’).
For a given meter, the units should NEVER, EVER be changed.
When the measurement does not represent a volume, the unit description should always describe WHAT is measured (ie: apples, disk, routers, floating IPs, etc.).
When creating a new meter, if another meter exists measuring something similar, the same units and precision should be used.
Meters and samples should always document their units in Ceilometer (API and Documentation) and new sampling code should not be merged without the appropriate documentation.
Naming convention¶
If you plan on adding meters, please follow the convention below:
Always use ‘.’ as separator and go from least to most discriminant word. For example, do not use ephemeral_disk_size but disk.ephemeral.size
When a part of the name is a variable, it should always be at the end and start with a ‘:’. For example, do not use <type>.image but image:<type>, where type is your variable name.
If you have any hesitation, come and ask in #openstack-telemetry
Meter definitions¶. | https://docs.openstack.org/ceilometer/latest/contributor/measurements.html | 2022-08-07T23:06:05 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.openstack.org |
My WebLink
|
|
About
|
Sign Out
1989 Oct Vol 21 No 3 Town Reporter SRP Undergrounding Police Dept 25 Years with Photos
>
Town Clerk
>
Permanent Records
>
Permanent
>
1989 Oct Vol 21 No 3 Town Reporter SRP Undergrounding Police Dept 25 Years with Photos
Metadata
Thumbnails
Annotations
Entry Properties
Last modified
7/14/2016 4:17:34 PM
Creation date
7/11/2016 5:12:44 PM
Metadata
Fields
Template:
Clerk General
Dept
Town Clerk
Sub
Administration and Management
Content
Publications Produced by Public Body-Historical
Description
SRP Undergrounding, Police Department
Publication
Town Reporter
Year
1989 <br />Corner <br />Mayor Rob er t W. Pl enge <br />Commu.nication b etween <br />Town Hall and Citize ns <br />As we approach the end of 1989 and I <br />reflect upon the successes and short· <br />comings of our Town Council, I wou ld <br />like to share some thoughts with you. <br />First, it has been a period of relative <br />stability and one wi thout some of the <br />hurdles of years past. We have not had <br />to fend off those who would convert <br />our Town from a residential commun· <br />ity to one dotted with commercial <br />development. We have received sup· <br />port and cooperation from residents <br />and deve lopers for our program of <br />e liminating utili ty poles and under· <br />grounding lines and cables. We have <br />seen positive results from our higher <br />standards and stricter enforcement of <br />weed control and property mainten· <br />ance. And I am sure you are aware <br />of street improvements on Lincoln, <br />Mockingbird and the Ta tum curve. <br />Those of you who have visited the <br />Town Hall or Police Department dur· <br />ing business hours have probably <br />noticed a busy organization working in <br />very close quarters, particula rl y since <br />the addition of several police officers. <br />We are concerned a lso with the con& <br />tion and appearance o f the yard where <br />our street cleaning and other equip· <br />ment is serviced. Therefore, the Town <br />recently acquired additional property <br />along Lincoln east of the Town Hall for <br />potential new faci li ties. We currently <br />have a sub-committee working on pre· <br />lim inary p lans toward that end. <br />I began by referring to the Town <br />Council. I am pleased to te ll you t hat <br />we have a dedicated group of men and <br />women who work well together. Cer· <br />ta in ly there are differences of opinions, <br />and decisions are not always unani· <br />mous. However, there is an underlying <br />spirit of respect and cooperation wh ich <br />a ll ows us to solve problems and man· <br />age the Town's business effectively. <br />On February 6, 1990 you will have an <br />opport,unity to select a new Council. It <br />is my opinion that whi le experience <br />and continuity are important, it is a lso <br />good to have "new blood" involved <br />periodically. If there are t hose who you <br />believe should serve the Town in some <br />capacity, encourage them to get <br />involved. Elsewhere in th is Town <br />Reporter you will learn how to become <br />a candidate for Town Council. P lease <br />feel free to call me or the Town Clerk, <br />May Ann Brines for further <br />information . <br />New Police <br />Facility Planned <br />In order to maintain their high level of <br />excell ence, the Paradise Va ll ey Police <br />Department is looking forward to <br />expanded quarters. Over the years, <br />the addition of new officers, computer· <br />ized systems and expanded services <br />has exceeded the capabilities of the <br />present building and created the need <br />for a larger facility. <br />TOWN BBPDBTBR <br />TOWN OF PARADISE VALLEY ::'.:.::; ~~~~~~·::.;~~ ..... , <br />Published periodically by th e Town Counci l <br />The Town Council has approved <br />$750,000 in seed money from the cur· <br />rent fiscal budget to get the project <br />underway. The project will extend into <br />next fisca l year with additional budget <br />allocations. The Town will issue an <br />RF P for architectural services in the <br />next month . 
<br />Street <br />Improvement <br />Projects Update <br />• MacDonald Drive · 59th P lace to <br />Town's east boundary <br />Street will be repaved with the addition <br />of curbs. MacDonald will remain a 35 <br />MPH, two -lane street wit h a turn lan e. <br />Federal funds are expected for a por· <br />tion of this project. <br />• MacDonald/Ta tum intersection <br />realignment <br />Various designs are under discussion. <br />$500,000 has been b udgeted from the <br />1989-90 fisca l year for th is project. <br />• Mockingbird Lane south o f Lincoln <br />Drive v'.i ill be resurfaced and all but the <br />69 kV lines will be undergrounded. <br />Bike paths and landscaping wi ll be <br />added. $230,000 has been budgeted. <br />• Tatum curve realignment <br />The major portion of construction is <br />scheduled for completion by October <br />15, 1989. This will include signal li ghts <br />and road striping. A firm has been <br />h ired to design and implement pedes· <br />trian paths, bike paths and landscaping <br />a long Tatum Boulevard. <br />Mayor Offic e Home <br />Robert W. Plenge 948· 7 411 948·0406 <br />Vice Mayo r <br />Joan R. Lincol n 948-7411 948-58 13 <br />Councilmen <br />John E. Mill er, Jr. 948-7411 948-79 15 <br />Sara D. Moya 948-741 1 99 1-1 906 <br />Ri chard R. Mybeck 948-74 11 948 -2243 <br />ScottH.O'Connor 948-74 11 99 1-5597 <br />Kent D. Wick 948-741 1 95 1-9099 <br />Town Manager <br />Joh n L. Baudek 948-74 11 <br />Town Clerk <br />Mary A nn Bri nes 948-7411 <br />Polic e Chief <br />Donald D. Lozier 948-741 8 <br />(Em ergency) 911
The URL can be used to link to this page
Your browser does not support the video tag. | https://docs.paradisevalleyaz.gov/WebLink/0/doc/26213/Page2.aspx | 2022-08-07T22:47:49 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.paradisevalleyaz.gov |
Release Notes: July 7th, 2022
Public API and the documentation
>
Getting Started
Get started with Syntropy Stack with our three-minute quick start video, or follow the documentation for more details. Use Syntropy Stack to streamline your container and multi-cloud deployments and automate network orchestration.
Getting about 1 month ago | https://docs.syntropystack.com/docs/release-notes-july-7th-2022 | 2022-08-07T21:40:19 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['https://files.readme.io/3062bd6-Syntropy_Stack_Release_Notes_1000x420_07072x.jpg',
'Syntropy Stack Release Notes 1000x420 [email protected] 2160'],
dtype=object)
array(['https://files.readme.io/3062bd6-Syntropy_Stack_Release_Notes_1000x420_07072x.jpg',
'Click to close... 2160'], dtype=object)
array(['https://files.readme.io/b1abbd8-One_Vertical_16_9.png',
'One Vertical 16_9.png 1259'], dtype=object)
array(['https://files.readme.io/b1abbd8-One_Vertical_16_9.png',
'Click to close... 1259'], dtype=object) ] | docs.syntropystack.com |
AI Risk Assessment Algorithm
AI's risk management advantages are more reflected in the financial service field. Artificial intelligence and big data are the right arms of financial risk control, and both are indispensable. The application in the evaluation of financial risks and returns can help solve the problem of information asymmetry, provide accurate user needs, and improve the efficiency of farming returns.
The high-quality knowledge graph provides an effective relationship between information, which is helpful to reduce the problem of information asymmetry; the deep learning method facilitates engine iteration, which is beneficial to provide users with the most suitable investment pool according to user needs. Based on a multi-factor model, the recommendation algorithm is conducive to predicting product risks and returns.
The expected return of the mining pool is the compensation for the risks undertaken by the profit farming participants, and the multi-factor model is a quantitative expression of the risk-return relationship. The multi-factor model quantitatively describes the linear relationship between the expected return rate of the mining pool and the factor load (risk exposure) of the mining pool on each factor, and the linear relationship between the factor return rate of each factor per unit factor load (risk exposure) . Multi-Factor Model (
Multiple-Factor Model, MFM
) is a complete risk model developed based on the idea of the APT model.
The general expression of the multi-factor model:
Where:
Xik = Factor exposure of pool i on factor k
fk = Factor return for k
μi = The residual rate of return of pool i
Protocol - Previous
NFT Oracles
Investors and Partners
Last modified
1yr ago
Copy link | https://docs.theforce.trade/protocol/ai-risk-assessment-algorithm | 2022-08-07T21:29:54 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.theforce.trade |
Denial of Service (DoS) cyber-attacks are on the rise, with victims ranging from individuals to large organisations. Such attacks can lead to significant disruption, with lost revenue, damaged reputation and increased costs just some of the implications. The denial of service threat has become such that many businesses are starting to realise that it is vital to take serious steps to protect their systems and data against it. Here we explore precisely what is involved in a denial of service cyber-attack, and provide advice on how to prevent one with some top cybersecurity tips.
What is a Denial of Service Attack?
Denial service attacks are deliberate attempts to disrupt access to computer networks, and prevent users from accessing the information they need. Cyber criminals may target email accounts, websites, online banking systems and even military intelligence networks in this way.
Most commonly, a DoS attack will involve a network being flooded with information so that the server capacity becomes overloaded, slowing down systems or preventing users from accessing their systems altogether.
A distributed denial of service cyber-attack (DDoS) is when an attacker makes it impossible for a service to be delivered. This tends to involve preventing access to services, servers, devices, networks and applications by sending malicious data or requests from multiple systems.
Attackers will literally drown systems with requests for data, usually by sending a web server so many requests to serve a page that it crashes, or by hitting a database with large volumes of queries. It all leads to CPU and RAM capacity and internet bandwidth becoming overwhelmed.
What are the types of denial service attacks?
There are three main types of denial of service threats:
1. Volume-based attacks
These DoS attacks involve sending massive volumes of bogus traffic with the intention of overwhelming a website or server.
2. Protocol or network layer attacks
These denial service attacks send lots of packets to targeted networks. Examples include SYN floods and Smurf DDoS.
3. Application layer attacks
These cyber-attacks work by flooding applications with maliciously designed requests.
Whatever the type of DoS attack, the objective remains the same: to render online resources slow or completely unresponsive.
What motivates cyber-attackers to launch denial of services attacks?
As is the case with the majority of cybercrimes, there is financial motive behind DoS attacks. However, that’s not all that compels attackers.
Over the past few years, it has become apparent that online ‘vandalism’ is also a motivator, with attackers targeting organisations that they may have issue with, perhaps because of ethical failings.
How to identify a denial of service cyber-attack?
If a network becomes unusually slow, or goes down completely, this could be a warning sign of a DoS attack. Files and websites may take longer to open, or could be inaccessible altogether. There could be an increase in spam emails in an attempt to overwhelm an account, so as to block receipt of legitimate messages.
The best way to detect a DoS attack is through network traffic monitoring and analysis. This can be done using a firewall. It is good practice to set up rules that alert network managers when an unusual level of traffic is detected, so that they can check for an attack.
Once a denial service attack has commenced, there is little that can be done to thwart it. It is therefore best to try to mitigate the risk of such an attack occurring in the first place.
Any organisation can be susceptible to a DoS attack, whatever its industry, whatever its size. Cyber-attacks are on the rise and have been growing since the start of the pandemic, making it vital that businesses take steps to protect themselves against the denial of service threat.
How to prevent denial service attacks?
DoS attackers are constantly updating their methods and changing tactics, taking advantage of emerging vulnerabilities and orchestrating new types of attacks.
Cybersecurity specialists recommend that organisations take steps to defend their networks from hacker threats by adopting the following strategies:
- Monitoring of the global denial of service threat – an understanding of the latest threats being posed around the world is vital.
- Always installing security and system updates – security patch management is one of the most important aspects of cybersecurity.
- Monitoring systems and devices – ongoing monitoring makes it more straightforward to identify system anomalies as soon as they arise.
- Carrying out risk assessments – these help identify vulnerabilities and the potential effects of significant downtime.
- Ensuring firewalls are correctly configured – setting the right filters and rules will help protect networks from unauthorised access, and content filtering devices will also assist in this respect.
- Limiting inbound connections – if inbound connections to a mail server are allowed, this can leave it susceptible to DDoS attacks. It is also good practice to limit the size of emails and attachments to prevent overwhelming attacks.
- Training staff – cybersecurity awareness training is one of the best ways to prevent cyber-attacks, and is important across all organisations, whatever their size.
Preventing denial service attacks with cybersecurity expertise from PC Docs
There is no denying that DoS attacks are on the rise, and that cyber criminals are becoming increasingly sophisticated in their tactics. But there are steps you can take to protect your business. expert advice to help you instil good cyber security habits across your organisation.
To learn how we can help keep your organisation safeguarded against all the latest cyber threats, including denial of service attacks, you are welcome to get in touch. | https://www.pc-docs.co.uk/how-denial-of-service-attacks-are-a-threat-to-cybersecurity/ | 2022-08-07T23:03:38 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['https://www.pc-docs.co.uk/wp-content/uploads/2021/05/Depositphotos_196118206_s-2019-e1620960990886.jpg',
None], dtype=object) ] | www.pc-docs.co.uk |
A Team is a subdivision of an organization with associated users, projects, credentials, and permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across organizations. For instance, permissions may be granted to a whole Team rather than each user on the Team.
You can create as many Teams of users as make sense for your Organization. Each Team can be assigned permissions, just as with Users. Teams can also scalably assign ownership for Credentials, preventing multiple Tower interface click-throughs to assign the same Credentials to the same user.
Access the Teams page by clicking the Teams (
) icon from the left navigation bar. The Teams page allows you to manage the teams for Tower. The team list may be sorted and searched by Name or Organization.
To create a new Team:
Click the
button.
Enter the appropriate details into the following fields:
Name
Description (optional)
Organization (Choose from an existing organization)
Click Save.
Once the Team is successfully created, Tower opens the Details dialog, which also allows you to review and edit your Team information. This is the same menu that is opened if the Edit (
) button is clicked from the Teams link. You can also review Users and Permissions associated with this Team.
This tab displays the list of Users that are members of this Team. This list may be searched by Username, First Name, or Last Name. For more information, refer to Users.
In order to add a user to a team, the user must already be created in Tower. Refer to Create a User to create a user. Adding a user to a team adds them as a member only, specifying a role for the user on different resources can be done in the Permissions tab . To add existing users to the Team:
Click the
button.
Select one or more users from the list of available users by clicking the checkbox next to the user(s) to add them as members of the team.
In this example, one user has been selected to be added to this team.
Click the Save button when done.
Selecting the Permissions view displays a list of the permissions that are currently available for this Team. The permissions list may be sorted and searched by Name, Inventory, Project or Permission type.
The set of privileges assigned to Teams that provide the ability to read, modify, and administer projects, inventories, and other Tower elements are permissions. By default, the Team is given the “read” permission (also called a role).
Permissions must be set explicitly via an Inventory, Project, Job Template, or within the Organization view.
To add permissions to a Team:. | https://docs.ansible.com/ansible-tower/3.8.6/html/userguide/teams.html | 2022-08-07T22:16:21 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.ansible.com |
I use Blazor since nearly one year but I have many problem to publish my app.
The last one I wrote was in .Net 5 with ASP.NET hosting so I have 3 project :
myproject.client
myproject.server
myproject.shared
In Visual Studio 2019 it works great with my web API in server project but I can't publish it on IIS.
I made publish in directory on VS and I created a website pointing to this directory.
I get Loading... like in all Blazor app but it stay like that.
When I press F12 I ger in console :
can you help me please ? | https://docs.microsoft.com/en-us/answers/questions/175462/blazor-webassembly-publication-issue-on-iis.html | 2022-08-07T22:49:57 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['/answers/storage/attachments/42745-image.png', '42745-image.png'],
dtype=object) ] | docs.microsoft.com |
>>.
If you are located in a different timezone, time-based searches use the timestamp of the event from the Splunk instance that indexed the data. See How time zones are processed by the Splunk platform. Relative time range options to specify a custom time range for your search that is relative to Now or the Beginning of the current Days Ago, the Earliest snap to value is Beginning of today.
The preview boxes below the fields update to the time range as you make the selections.
To learn more about relative time ranges, see Specify time modifiers in your search.
Define custom Real-time time ranges
- Splunk Cloud Platform
- Real-time search is not enabled by default in Splunk Cloud Platform.
- Splunk Enterprise
- The Real-time option enables Splunk Enterprise users to specify a custom Earliest time for a real-time search. Because this time range is for a real-time search, a Latest time is not relevant.
-).
- Splunk Cloud Platform
- To set the default time ranges for the API, request help from Splunk Support. If you have a support contract, file a new case using the Splunk Support Portal at Support and Services. Otherwise, contact Splunk Customer Support. Splunk Cloud Platform users don't have shell access to the Splunk Cloud Platform deployment and can't use the CLI to set default time ranges.
- Splunk Enterprise
- Prerequisites
- Only users with file system access, such as system administrators, can change time ranges manually in the
times.conffile.
-
times.conffile for the Search app. For example,
$SPLUNK_HOME/etc/apps/<app_name>/local.
- Create a stanza for the time range that you want to specify. For examples, see the times.conf reference in the Admin Manual.
Change the default time range
The default time range for ad hoc searches in the Search & Reporting App is set to Last 24 hours. An administrator can set the default time range globally, across all apps. See Change default values in the Admin Manual.! | https://docs.splunk.com/Documentation/Splunk/7.3.6/Search/Selecttimerangestoapply | 2022-08-07T21:58:34 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
# Node Owner Guide
Nodes are the critical components of Waves ecosystem. By running a node, you can help processing transactions and get profit for securing the network.
Every Waves node is a full node that takes part in the decentralized process of block creation by storing full blockchain data, passing the data to other nodes (relay blocks and transactions to miners), and checking that the newly added blocks are valid. Validation implicates ensuring that the format of the block is correct, all hashes in the new block were computed correctly, the new block contains the hash of the previous block, and each transaction in the block is validated and signed by the appropriate parties (answer end user queries about the state of the blockchain). Any node can propose new transactions, and these proposed transactions are propagated between nodes until they are eventually added to a block.
Nodes can be used for mining (generating new blocks). A mining node checks that each transaction is self-valid since the other nodes would reject the block if it includes invalid transactions. A node can have zero balance, but to start mining, a node must have the minimum balance of 1000 WAVES (including Waves that are leased to the node). The WAVES you own (or that have been leased to you) reflect your mining power, the more you own, the higher your chances of processing the next block and receiving the transaction fees as a reward. The final amount also depends on overall network activity and the amount of generated fees.
Note: You can find the list of the existing nodes at dev.pywaves.org.
Reasons to run node:
- Mining: earn profit for generating new blocks and transaction fees.
- Own project: get the latest blockchain data from your own node without having to trust third party. Send transactions from your own node. Use your node API, to be independent from third party. Tweak your own node to setup extended functionality for your project.
For details about Waves protocol, blockchain scalability and rewards see Waves-NG Protocol article.
# Install a Node
There are different options to install Waves node. The installation methods are explained in Install Waves Node article.
# Get Actual Blockchain
A running node requires blockchain database. Use one of the methods described in Synchronize Waves Blockchain article to get the latest blockchain database.
# Upgrade a Node
When you own a node, check the Releases page for the latest updates on a regular basis. Releases of new versions of node come with release notes document describing the new features and telling the node owner what actions to take to upgrade, depending on the type of the update. For details about upgrading see Upgrade Waves Node article.
# Deal With Forks
Fork is the moment when a blockchain splits in two seperate ones. Forks can happen because of node version difference (for example, when your node is of older version and does not support functionality of the newer ones). Also, forks can be caused by malicious attacks or system failure. A running node receives information from other nodes and monitors "the best blockchain" (that is the one that has the biggest generating balance). If the node spots a "better" blockchain that had split (forked) from the current one not more than 100 blocks ago, it can automatically switch to it. If it split more than 100 blocks ago a forked node continues generating blocks, but it does not communicate with other valid nodes.
You can check the blockchain height or the last 100 signatures of blocks to understand if your node is on fork or not. Use chaincmp utility to compare blockchains on your node and the reference nodes. Chaincmp utility indicates whether you are on the same blockchain with the reference nodes and if not it provides recommendations on further actions.
Your node can be on fork with height less than 2000 blocks or more than 2000 blocks.
- In case that your node is on fork with a height less than 2000 blocks, you can implement rollback and restart the node to begin generating blocks as described in Rollback Waves Node article.
- Otherwise, you need to go with one of the options described in Synchronize Waves Blockchain article.
# Node Go
In addition to standard Node implementation on Scala programming language, there is another (alternative) Node implementation on Go language. | https://docs.waves.tech/en/waves-node | 2022-08-07T23:16:12 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.waves.tech |
UniformSampleFromScaleVector
- class sherpa.sim.sample.UniformSampleFromScaleVector[source] [edit on github]
Bases:
sherpa.sim.sample.UniformParameterSampleFromScaleVector
Use a uniform distribution to sample statistic and, num=1, factor=4, numcores=None)[source] [edit on github]
Return the statistic and parameter samples.
- Parameters
fit (sherpa.fit.Fit instance) – This defines the thawed parameters that are used to generate the samples, along with any possible error analysis.
num (int, optional) – The number of samples to return.
factor (number, optional) – The half-width of the uniform distribution is factor times the one-sigma error.
numcores (int or None, optional) – Should the calculation be done on multiple CPUs? The default (None) is to rely on the parallel.numcores setting of the configuration file.
- Returns
samples – The array is num by (npar + 1) size, where npar is the number of free parameters in the fit argument. The first element in each row is the statistic value, and the remaining are the parameter values.
- Return type
2D numpy array | https://sherpa.readthedocs.io/en/4.14.1/mcmc/api/sherpa.sim.sample.UniformSampleFromScaleVector.html | 2022-08-07T22:57:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | sherpa.readthedocs.io |
MetricDatum
Encapsulates the information sent to either create a metric or add new values to be aggregated into an existing metric.
Contents
- Counts.member.N
Array of numbers that is used along with the
Valuesarray. Each number in the
Countarray is the number of times the corresponding value in the
Valuesarray occurred during the period.
If you omit the
Countsarray, the default of 1 is used as the value for each count. If you include a
Countsarray, it must include the same amount of values as the
Valuesarray.
Type: Array of doubles
Required: No
-
When you are using a
Putoperation, this defines what unit you want to use when storing the metric.
In a
Getoperation, this displays the unit that is used -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.
Type: Double
Required: No
- Values.member.N
Array of numbers representing the values for the metric during the period. Each unique value is listed just once in this array, and the corresponding number in the
Countsarray specifies the number of times that value occurred during the period. You can include up to 500 unique values in each
PutMetricDataaction that specifies a
Valuesarray.
Although the
Valuesarray accepts numbers of type
Double, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.
Type: Array of doubles
Required: No
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following: | https://docs.amazonaws.cn/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html | 2022-08-07T22:02:43 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.amazonaws.cn |
ASPxSchedulerDataUpdatingEventArgs Members Provides data for the ASPxSchedulerDataWebControlBase.AppointmentRowUpdating event. Constructors Name Description ASPxSchedulerDataUpdatingEventArgs(OrderedDictionary, OrderedDictionary, OrderedDictionary) Initializes a new instance of the ASPxSchedulerDataUpdatingEventArgs class with the specified settings. Fields Name Description Empty static Provides a value to use with events that do not have event data. Inherited from EventArgs. Properties Name Description Cancel Gets or sets a value indicating whether the event should be canceled. Inherited from CancelEventArgs. Keys Provides access to the collection of unique IDs for the appointment which is about to be modified. NewValues Provides access to the collection of modified values for the appointment’s data fields. OldValues Provides access to the collection of values before modification of the appointment’s data fields. Methods ASPxSchedulerDataUpdatingEventArgs Class DevExpress.Web.ASPxScheduler Namespace | https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxScheduler.ASPxSchedulerDataUpdatingEventArgs._members | 2022-08-07T23:04:32 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.devexpress.com |
Events Grid Setup¶
Here you will specify all the events that exist in your competition. You can work through the grid in on the webpage or set it up on a spreadsheet and copy it in. The latter is probably easier as it is simpler to edit.
Column Headings¶
- Num - Enter a number for your event(s). It can be simple as "1, 2, 3" etc or "R1, R2, R3", or you could use T&F numbers (i.e, T01, F01 etc) - whatever you want. If using letters, these MUST be capitals.
- Event code - Enter your event code. You CANNOT just write whatever you want here. Please see the bottom of this page for the complete list of event codes to use.
- Age Groups - This will default to the World Athletics or UKA defined age groups. List the age groups you want a given event to be available for. Put ALL if it is for everyone. As an example:
- U15,U17,V35,V65 - this would be eligible only for the age groups listed and no-one outside these groups could enter
- Please note you if you have ticked Categorise U20 as Senior in Config, this is for display purposes only; you still need to specify U20 in this grid to accept U20 entries.
- Gender - Similar to above but use "M" and "F". Put "MF" for both genders.
- Category - User defined. Can simply put "SW" or "WHEELCHAIR", etc. Note 1: You cannot have a duplicate event, which is defined by the same event code and category. You need to label a separate category if two events with same code.
- Name - Name your event
- Rounds - This column dictates the structure of the event and denotes whether there will be just one race or heats and finals. The number you type defines the number of heats and you can specify rounds using a comma:
- 1 round with 1 heat: 1
- 1 round with 5 heats: 5
- 2 rounds with 5 heats and 1 final: 5,1
- QF(4), SF(2), F(1): 4,2,1
- Day - The day your event is on. If your event is just over one day then just put "1", otherwise write the day it takes place. If your event has multiple rounds, specify the day of Round 1.
- R1 Time - As above, specify what time Round 1 is scheduled to take place.
- Parent - This relates to Combined Events, such as heptathlon. See the section below. Leave blank if not doing a combined event.
- Entries - This is just a simple count to see how many entries you have in a given event. Click the green "Count Entries" button for it to calculate.
Note: If you want to make sure your events are in chronological order but you are not sure of your final timetable, then just list them simply 1, 2, 3 etc. Once you timetable is finalised, click the RENUMBER EVENTS button.
Click SAVE when done.
Combined Events¶
You need to complete the "parent" column. Please complete it as per the image below.
As you can see the Decathlon has been given event "Num" 11. In the Parent column, for all events (10 in this instance) that belong to your master event (Decathlon), type the event number you gave to the master event (decathlon: 11).
Event Codes¶
We've put together a table of the event codes available in our system. These are used to specify what your event is.
N.B. Not all events are listed as it would be infinite to list every distance possible. Pay particular attention to distance events as the format is repeated, e.g. 60m, 60H, 5k, 5 miles, 20km walk, 2000m steeplechase, relays, etc
Table of Codes¶
SportsHall Codes¶
For help with the SportsHall concept and to learn how the events work and codes should be applied, please take a look at the handbook. | https://docs.opentrack.run/cms/event/eventssetup/ | 2022-08-07T23:17:02 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['https://file.opentrack.run/live/manuals/Manuals%202020/Virtual%20Racing/managevents.png',
None], dtype=object)
array(['https://file.opentrack.run/live/manuals/Manuals%202020/CMS/multievent.png',
None], dtype=object) ] | docs.opentrack.run |
Jupyter).
After.
There are two ways to access notebooks in Determined: the command-line interface (CLI) and the WebUI. To install the CLI, see Installation.
Command Line¶
The following command will automatically start a notebook with a single GPU and open it in your browser.
$ det notebook start Scheduling notebook unique-oyster (id: 5b2a9ea4-a6bb-4d2b-b42b-25e4064a3220)... [DOCKER BUILD 🔨] Step 1/11 : FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04 [DOCKER BUILD 🔨] [DOCKER BUILD 🔨] ---> 9918ba890dca [DOCKER BUILD 🔨] Step 2/11 : RUN rm /etc/apt/sources.list.d/* ... [DOCKER BUILD 🔨] Successfully tagged nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04-73bf63cc864088137a477ce62f39ffe8 [Determined] 2019-04-04T17:53:22.076591700Z [I 17:53:22.075 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret [Determined] 2019-04-04T17:53:23.067911400Z [W 17:53:23.067 NotebookApp] All authentication is disabled. Anyone who can connect to this server will be able to run code. [Determined] 2019-04-04T17:53:23.073644300Z [I 17:53:23.073 NotebookApp] Serving notebooks from local directory: / disconnecting websocket Jupyter Notebook is running at:
After the notebook has been scheduled onto the cluster, the Determined CLI will open a web browser
window pointed to that notebook’s URL. Back in the terminal, you can use the
det notebook list
command to see that this notebook is one of those currently
RUNNING on the Determined cluster:
$ det notebook list Id | Entry Point | Registered Time | State --------------------------------------+--------------------------------------------------------+------------------------------+--------- 0f519413-2411-4b3c-adbc-9b1b60c96156 | ['jupyter', 'notebook', '--config', '/etc/jupyter.py'] | 2019-04-04T17:52:48.1961129Z | RUNNING 5b2a9ea4-a6bb-4d2b-b42b-25e4064a3220 | ['jupyter', 'notebook', '--config', '/etc/jupyter.py'] | 2019-04-04T17:53:20.387903Z | RUNNING 66da599e-62d2-4c2d-91c4-01a04045e4ab | ['jupyter', 'notebook', '--config', '/etc/jupyter.py'] | 2019-04-04T17:52:58.4573214Z | RUNNING
The
--context option adds a folder or file to the notebook environment, allowing its contents to
be accessed from within the notebook.
det notebook start --context folder/file
The
--config-file option can be used to create a notebook with an environment specified by a
configuration file.
det notebook start --config-file config.yaml
For more information on how to write the notebook configuration file, see Notebook Configuration.
Useful CLI Commands¶
A full list of notebook-related commands can be found by running:
det notebook --help
To view all running notebooks:
det notebook list
To kill a notebook, you need its ID, which can be found using the
list command.
det notebook kill <id>
WebUI¶
Notebooks can also be started from the WebUI. You can click the “Tasks” tab to take you to a list of the tasks currently running on the cluster.
From here, you can find running notebooks. You can reopen, kill, or view logs for each notebook.
To create a new notebook, click “Launch Notebook”. If you would like to use a CPU-only notebook, click the dropdown arrow and select “Launch CPU-only Notebook”.
Notebook Configuration¶
Notebooks can be passed a notebook configuration option to control the notebook environment. For example, to launch a notebook that uses two GPUs:
$ det notebook start --config resources.slots=2
Alternatively, a YAML file can also be used to configure the notebook, using the
--config-file
option:
$ cat > config.yaml <<EOL description: test-notebook resources: slots: 2 bind_mounts: - host_path: /data/notebook_scratch container_path: /scratch idle_timeout: 30m EOL $ det notebook start --config-file config.yaml
See Job Configuration Reference for details on the supported configuration options.
Finally, to configure notebooks to run a predefined set of commands at startup, you can include a
startup hook in a directory specified with the
--context option:
$ mkdir my_context_dir $ echo "pip3 install pandas" > my_context_dir/startup-hook.sh $ det notebook start --context my_context_dir
Example: CPU-Only Notebooks
By default, each notebook is assigned a single GPU. This is appropriate for some uses of notebooks
(e.g., training a deep learning model) but unnecessary for other tasks (e.g., analyzing the training
metrics of a previously trained model). To launch a notebook that does not use any GPUs, set
resources.slots to
0:
$ det notebook start --config resources.slots=0
Save and Restore Notebook State¶
Warning
It is only possible to save and restore notebook state on Determined clusters that are configured with a shared filesystem available to all agents.
To ensure that your work is saved even if your notebook gets terminated, it is recommended to launch all notebooks with a shared filesystem directory bind-mounted into the notebook container and work on files inside of the bind mounted directory.
By default, clusters that are launched by
det deploy aws/gcp up create a Network file system
that is shared by all the agents and automatically mounted into Notebook containers.
For example, a user
jimmy with a shared filesystem home directory at
/shared/home/jimmy
could use the following configuration to launch a notebook:
$ cat > config.yaml << EOL bind_mounts: - host_path: /shared/home/jimmy container_path: /shared/home/jimmy EOL $ det notebook start --config-file config.yaml
To launch a notebook with
det deploy local cluster-up, a user can add the
--auto-bind-mount
flag, which mounts the user’s home directory into the task containers by default:
$ det deploy local cluster-up --auto-bind-mount="/shared/home/jimmy" $ det notebook start
Working on a notebook file within the shared bind mounted directory will ensure that your code and
Jupyter checkpoints are saved on the shared filesystem rather than an ephemeral container
filesystem. If your notebook gets terminated, launching another notebook and loading the previous
notebook file will effectively restore the session of your previous notebook. To restore the full
notebook state (in addition to code), you can use Jupyter’s
File >
Revert to Checkpoint
functionality.
Note
By default, JupyterLab will take a checkpoint every 120 seconds in an
.ipynb_checkpoints
folder in the same directory as the notebook file. To modify this setting, click on
Settings
>
Advanced Settings Editor and change the value of
"autosaveInternal" under
Document
Manager.
Use the Determined CLI in Notebooks¶
The Determined CLI is installed into notebook containers by default. This allows users to interact
with Determined from inside a notebook—e.g., to launch new deep learning workloads or examine the
metrics from an active or historical Determined experiment. For example, to list Determined
experiments from inside a notebook, run the notebook command
!det experiment list. | https://docs.determined.ai/latest/interfaces/notebooks.html | 2022-08-07T23:12:18 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../_images/[email protected]', '../_images/[email protected]'],
dtype=object)
array(['../_images/[email protected]',
'../_images/[email protected]'], dtype=object)] | docs.determined.ai |
Principle 7: Demand shaping
Demand shifting is the strategy of moving compute to regions or times when the carbon intensity is less; or to put it another way, when the supply of renewable electricity is high.
Demand shaping is a similar strategy, but instead of moving demand to a different region or time, we shape our demand to match the existing supply.
If the supply of renewable energy is high, increase the demand (do more in your applications); if the supply is low, decrease demand (do less in your applications).
A great example of this is video-conferencing software. Rather than streaming at the highest quality possible at all times, they often shape the demand by reducing the video quality to prioritize audio.
Another example is TCP/IP. The transfer speed ramps up in response to how much data can broadcast over the wire.
A third example is progressive enhancement with the web. The web experience improves depending on the resources and bandwidth available on the end user's device.
Carbon-aware vs. carbon-efficient
Carbon efficiency can be transparent to the end user. You can be more efficient at every level in converting carbon to useful functionality, while still keeping the user experience the same.
But at some point, being transparently more carbon-efficient isn't enough. If the carbon cost of running an application right now is too high, we can change the user experience to reduce carbon emissions further. At the point the user is aware the application is running differently, it becomes a carbon-aware application.
Demand shaping carbon-aware applications is all about the supply of carbon. When the carbon cost of running your application becomes high, shape the demand to match the supply of carbon. This can happen automatically, or the user can make a choice.
Eco-modes
Eco-modes are often used in life; for instance, in cars or washing machines. When switched on, the performance changes as they consume fewer resources (gas/electricity) to perform the same task. It's not cost-free (otherwise, we would always choose eco-modes), so we make trade-offs. Because it's a trade-off, eco-modes are almost always presented to a user as a choice, and the user decides if they want to go with it and accept the compromises.
Software applications can also have eco-modes, which when engaged change application behavior in potentially two ways:
Intelligence. Giving users information so they can make informed decisions.
Automatic. The application automatically makes more aggressive decisions to reduce carbon emissions.
Summary
Demand shaping is related to a broader concept in sustainability, which is to reduce consumption. We can achieve a lot by becoming more efficient with resources, but at some point, we also just need to consume less. As Sustainable Software Engineers, to be carbon-efficient means perhaps when the carbon intensity is high, instead of demand-shifting compute, we consider canceling it, thereby reducing the demands of our application and the expectations of our end users.
Need help? See our troubleshooting guide or provide specific feedback by reporting an issue. | https://docs.microsoft.com/en-us/learn/modules/sustainable-software-engineering-overview/9-demand-shaping?WT.mc_id=green-9537-cxa | 2022-08-07T22:35:02 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
Optimizing an online shop for a top spot among search engines is feasible for a professional. Defending this position against the competition in the long term is more elaborate, as your competitors are optimizing too.
This SEO guide covers the basic SEO functions of Shopware 5. You will find important tips & tricks along with valuable explanations of the new responsive template. This guide serves as your basis for optimizing your shop for search engines for the long haul.
Search engine ranking takes more than 100 factors into account, making it impossible for this guide to explain every intricate detail. The “Tips & Tricks” section deals with specific ranking factor criteria and how these can be checked using Shopware 5’s SEO tools.
Bad ranking is the product of inferior page content. In the worst case scenario, a site includes duplicate content, broken links, poor-quality backlinks, excessive keyword use and slow delivery time.
The new responsive template provides a separate XML sitemap for mobile devices. The standard version of Shopware 5 gives you the tools you need for creating top-notch SEO page content. Furthermore, the responsive template caters to Google’s crawler, which checks your website for mobile-friendliness.
With the new responsive theme, all pages that are relevant for search engines have been completely optimized on-page. Tags which are unnecessary or no longer SEO- relevant have been removed – existing tags have been improved following the new SEO standards.
You can either adjust your URLs automatically or individually using the new SEO router engine. The Shopware 5 SEO engine gives you complete flexibility for setting up SEO URLs. The settings can be adjusted in the backend – there is no need to change the source code or template.
Since April 21, 2015, mobile optimization has been an important factor for search engine ranking. Since this update, Google actually punishes sites for not being mobile-friendly – they want their users to get only high-quality search results.
This has a massive impact on search queries. The Shopware 5 standard template supports this new requirement 100%, since it is completely responsive out-of-the-box.
You can check if your website is responsive using Google's Mobile-Friendly test.
The Shopware 5 responsive template makes the Google webmaster tools jump for joy. The following screenshot shows Google’s expectations for responsive optimization on mobile devices. Small font sizes and selectable elements that are placed too close are now a thing of the past with the Shopware 5 responsive template.
After switching to the Shopware 5 template, you will instantly see a noticeable improvement in your shop. Google will be unable to find errors regarding usability for mobile devices. This not only has a positive influence on your conversion rate for mobile devices, but also leads to a better search engine ranking for your online shop.
Here is an excerpt from the Google Webmasters account. This statistic shows the number of errors and issues relating to usability on mobile devices. After switching to the new Shopware 5 template, Google no longer has any errors or suggestions to report. This on its own has a positive effect on your search engine ranking.
The further optimized SEO router of Shopware 5 provides everything you need to offer SEO-friendly URLs in all variations to the search engines. The intelligent Shopware SEO router saves the history of every URL. Below we explain what to consider when generating and using those URLs and how to build them in a Google-friendly way.
If a URL changes (for example when a product name has been changed), Shopware 5 remembers the "old" item URL and redirects it to the new URL with a 301 redirect.

It makes no difference whether the bot accepts cookies or not — a clean 301 redirect is delivered in any case. Furthermore, the canonical URL of the target page is output on the redirected page. This way Shopware makes sure that no duplicate content is recorded for the site.

This means the new SEO router guarantees that all URLs generated by Shopware remain accessible for the crawler. Google will definitely reward this.
This chapter deals only with the creation of individual SEO URLs. Because Smarty can be used within the URL templates, you can shape your SEO URLs completely freely.
If you want to implement any of the tutorials in this chapter, we explicitly advise you to test your changes in a staging environment first. If you change the SEO URLs live in your productive environment, the changes to your shop URLs could instantly affect your search engine ranking.
Google SEO chief Matt Cutts casually mentioned in a video how URLs should be built: an optimal URL should contain a maximum of 3-5 words and should not be longer than roughly 60 characters — otherwise the additional keywords within the URL will largely be ignored. So make sure that your product URLs are built according to Google's suggestions. Shopware 5 offers the right tools for this.
In a videocast about how to build URLs, Google SEO Guru Matt Cutts offers other useful hints and tips for a perfect URL.
You should also consider that every newly created URL results in a new entry in the s_core_rewrite_urls table. If, for example, you have a shop with 50,000 items and regenerate an adjusted URL template 3 times, 150,000 additional entries will be created in your database — a total of 200,000 entries! Unnecessary entries should be avoided to keep your database lean and tidy.
Never ever empty the table s_core_rewrite_urls. Otherwise you'll lose all known 301 redirects.
Note that chains of more than 5 redirects can cause problems when bots index the URLs.
Shopware uses slugify to generate the URL slugs. The slugify rules can be modified via a separate plugin and adapted to your wishes.

In the Shopware devdocs you will find useful hints and a sample plugin for implementing such an adjustment.
This means that you are able to create your own rules for replacing special characters.

With this piece of the puzzle you finally have total control over every single URL in the frontend that is picked up by the search engines.
The standard product URL in Shopware 5 looks like this: base-category/sub-category/item-id/item-name. You should ensure that the structure of your product URLs corresponds to Google's recommendations (mentioned in the video above). Shopware 5 provides the optimal solution for this.
We recommend that professional SEO optimizers create the URL individually per article. The SEO router settings in the backend enable you to completely individualize your product URLs. The standard SEO template for the item detail page looks like this:
{sCategoryPath itemID=$sArticle.id}/{$sArticle.id}/{$sArticle.name}
Shopware provides almost endless possibilities to customize the URL. For example, the category path can be removed from the URL completely. We advise professional SEO optimizers to create the URL individually per item; for this, Shopware provides the item free text fields, which can be edited in the item master data mask in the backend.
Here is an example that queries the free text field named attr4:
{if $sArticle.attr4} {$sArticle.attr4} {else} {$sArticle.id}/{$sArticle.name} {/if}
In this example the item free text field attr4 is queried. If it is filled with your own URL, the Shopware SEO engine picks up the content and uses it as the URL in the frontend. If the free text field is not filled in the item master data, the unique item ID plus the item name is used to generate the link.
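If you do not want to maintain free text fields at all and simply prefer short URLs without the category path, a reduced template could, as a sketch, look like this:

{$sArticle.id}/{$sArticle.name}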
Alternatively you could also use the field title in the meta-information of the item to generate your item URL. Here is an example, which generates a descriptive URL using the field title in the item master data mask. If this field is empty, a fall-back to the item name takes place.
{if $sArticle.metaTitle}{$sArticle.metaTitle}{else}{$sArticle.name}{/if}
Generally we advise including the item ID in the URL, because it is a consistent, unique attribute which ensures that a unique SEO link can be generated for each item.
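To additionally respect the length guideline mentioned above, you could combine the fallback with Smarty's truncate modifier — a sketch that assumes roughly 60 characters as the limit:

{$sArticle.id}/{if $sArticle.metaTitle}{$sArticle.metaTitle|truncate:60:"":true}{else}{$sArticle.name|truncate:60:"":true}{/if}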
Link: Changing/individualising of search-engine-friendly URLs
The standard adds an ?c=XX to the URL to accurately identify the category the item is assigned to. If this is not wanted you can remove this identifier by using the setting "Remove category-id from URL" in the SEO router backend.
Shopware 5 offers every possibility to create a category URL which corresponds to your wishes or the wishes of your SEO agency. By default, the category URL is structured as follows: category1/category2/category3. However, from the point of view of an SEO expert, with deep category nesting there is a risk that the category URLs at the lowest level will unintentionally become too long.
Shopware 5 has made major efforts to support short URLs instead of long ones. This way the Shopware category URLs are captured optimally by the search engine crawlers and displayed cleanly in the search results.
In this example we record the desired URL in the category free text field named attribute1 for the selected category: my-optimized-category-url
{if $sCategory.attribute.attribute1} {$sCategory.attribute.attribute1} {else} {sCategoryPath categoryID=$sCategory.id} {/if}/
This way the content of the category free text field attribute1 is used as the category URL. If the category free text field attribute1 is empty, the SEO URL is built according to the old scheme.
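With the value from the example above, the category would then be reachable under a URL such as www.myshop.com/my-optimized-category-url/ (the domain is only a placeholder) instead of the full nested category path.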
With individual blog SEO URLs, which we implemented at the request of the Shopware community, you get the possibility to present your blog content in an even more appealing way for search engines.

By default, the blog URL is generated from a standard template; with only small adjustments you can change how it is built. First, go to the SEO router settings and enter the desired Smarty expression in the field "SEO URLs blog template".
In this example the SEO URL of the blog entry is generated from its title and truncated after 50 characters.
{$blogArticle.title|replace:"/":"-"|truncate:"50":false}
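If several blog entries could start with the same 50 characters, you may want to keep each URL unique. A sketch that additionally prepends the blog article ID — assuming $blogArticle.id is available in this template context:

{$blogArticle.id}-{$blogArticle.title|replace:"/":"-"|truncate:"50":false}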
Not only article and category URLs can be individualized. Shopware 5 also offers the possibility to create a special URL for each manufacturer page by changing the SEO template within the SEO router settings. See Shopware 5 Wiki articles about manufacturer pages.
This chapter focuses on the customization of the supplier URL. For example, for short supplier names you can place an additional keyword in the URL instead of using the standard supplier URL.
First, open the free text field management in the backend and create a new text field called attr1 for the manufacturer table (s_articles_supplier_attributes). Now query the new manufacturer free text field within the SEO router settings:
{if $sSupplier.attributes.core.attr1} {$sSupplier.attributes.core.attr1} {else} {createSupplierPath supplierID=$sSupplier.id}/ {/if}
If the manufacturer free text field is filled, the supplier URL will be generated by using this content. Otherwise, the SEO router generates the standard supplier URL from the supplier name.
The URL for a static shop page can also be assigned via free text fields instead of using the automatically generated standard URL. Here is an example of an SEO router query using a free text field named attr1. As usual, the free text field attr1 (plain text) has to be added for the shop page table (s_cms_static_attributes). After creating the free text field, query the field attr1 in the SEO template for the shop pages:
{if $site.attributes.core.attr1} {$site.attributes.core.attr1} {else} {$site.description} {/if}
If, for example, the free text field attr1 of the shop page "Disclaimer" is filled with the text website-disclaimer, this text is used as the page URL instead of the automatically generated one once the SEO URLs have been regenerated via the performance module.
By default, non-SEO-relevant pages like the registration, the notepad, "my account" or filtered listing pages are delivered with the robots tag noindex,follow to avoid duplicate content. For example, if you use the filters in the listings, the URL is extended by the identifier f. The following URL parameters trigger the noindex,follow tag by default:
sPage,sPerPage,sSupplier,sFilterProperties,p,n,s,f
To prevent those pages from accidentally ending up in the index, we advise you not to change this default.
Many shop operators fail to create individual product descriptions and, for time reasons, copy the texts from the manufacturers' sites. If you do not create unique content, bear in mind that you may copy external links along with the product text. Most search engine providers rate such outgoing links and treat them as an important criterion for the popularity or relevance of a site.
Shopware 5 makes sure that only the links you really want are recorded by the search engines. In order not to pass your popularity on to unknown third-party websites, all external links are delivered with a nofollow attribute by default.
If you want to raise your shop's popularity by exchanging high-quality backlinks, you can add the corresponding links to the SEO follow backlinks list, separated by commas. Each entry must be a single link separated by a comma, and the last backlink must not end with a comma.
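A valid entry in this list could, for example, look like this (the domains are only placeholders):

https://www.trusted-partner.example,https://blog.my-brand.example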
We advise you to be careful about releasing backlinks: only release a backlink if you are 100% sure that the target is a trustworthy, permanently reachable site and that you will get a high-quality backlink in return. Anything else makes little sense from an SEO point of view.
Link: Google explains how to use nofollow for links
Sometimes the automatically generated title makes little sense for a page. For example, if you use deep category nesting in your shop, you can easily exceed the recommended pixel width of the title tag.
Shopware 5 provides an optimal solution to output a suitable title tag for product, blog and category pages by default. Please note that you can use Smarty within the Shopware template structure.
This means you could, for example, use Smarty's count_characters modifier. You are not bound by the structure of the standard title tags: by using free text fields in combination with small template adjustments, you can create your own title tags without needing any extra plugins.
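A sketch of such an adjustment inside the title block — assuming a limit of 60 characters and using Smarty's count_characters modifier:

{if $sArticle.name|count_characters:true > 60}{$sArticle.name|truncate:60:"...":true} | {config name=sShopname}{else}{$sArticle.name} | {config name=sShopname}{/if}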
It is not guaranteed that Google will actually use the title tag provided in the template. This circumstance is explained by Google SEO expert Matt Cutts.
This example shows how to output your own title tag by using the item free text field 1 (attr1). The block frontend_index_header_title is responsible for this.
{strip} {if $sCategoryContent.attribute.attribute1} {$sCategoryContent.attribute.attribute1} | {config name=sShopname} {elseif $sArticle.attr1} {$sArticle.attr1} | {config name=sShopname} {else} {if $sBreadcrumb} {foreach from=$sBreadcrumb|array_reverse item=breadcrumb}{$breadcrumb.name} | {/foreach} {/if} {config name=sShopname} {/if} {/strip}
To avoid overly long category titles you can alternatively use a category free text field for the category listing. Here you again have to override the block frontend_index_header_title. This example shows how the category free text field attribute2 can be used as an individual category title tag:
{strip} {if $sCategoryContent.attribute.attribute2} {$sCategoryContent.attribute.attribute2} {elseif $sCategoryContent.title} {$sCategoryContent.title} {else} {$smarty.block.parent} {/if} {/strip}
Shopware 5 already displays the blog items SEO-optimized by default. If you prefer a different title than the saved one, you can use one of the 6 free text fields of the blog-item.
For this the block frontend_index_header has to be overridden:
{strip} {if $sArticle.attr1} {$sArticle.attr1} {elseif $smarty.block.parent} {$smarty.block.parent} {else} {if $sBreadcrumb} {foreach from=$sBreadcrumb|array_reverse item=breadcrumb}{$breadcrumb.name} | {/foreach} {/if} {config name=sShopname} {/if} {/strip}
First, a check is made whether a title is saved in the blog item's free text field 1. If not, the title saved in the blog entry's on-page optimization settings is checked. If that is not set either, there is a fallback to the breadcrumb.
The Google image search is probably the most underestimated possibility to get more clicks on your shop with little effort.
Those small but valuable possibilities are exploited perfectly by the new Shopware 5 responsive theme. Images are displayed search engine-optimized on every SEO-relevant template by default. In this way you can be sure to not miss any little possibility to improve your ranking.
A few helpful tips for the daily usage of Shopware in terms of Google image search follow.
As the saying goes: "Many a little makes a muckle." In this spirit, here is an easy-to-implement way of placing useful keywords in the foreground, for example for the Google image search.
Please remember to use unique and not overly long file names when uploading images. Ideally you name the file after the product or category it relates to, or you simply put 1-2 search-engine-relevant keywords in the image name.

The image name automatically appears in the source code of all templates. This means the image is delivered with exactly this file name in the page's source code, which makes it possible for the search engines to crawl the image under its optimized name — fitting, for example, for each item, category or shopping world.
Just as with the creation of useful URLs, Google recommends separating keywords within the image name with hyphens instead of underscores (see the video in the SEO URLs section).
Please remember to use a unique file name. This guarantees that the file name is captured properly by the search engines.
The SEO-relevant alt tag of images is always output in the on-page optimized standard themes of Shopware 5. Remember to fill in the Title field for the product images. Work with a sense of proportion and do not set too many or overly long keywords.
If the image is not a background image rendered via CSS but output via an img element, the alt tag of the item image automatically appears within the shop themes. Do not be confused by the alt tag of a link, because that can overwrite the alt tag of an image.
SEO experts have been discussing for years whether the title attribute has any SEO relevance for images. Shopware 5 provides the possibility to output a different title attribute within the img element on the item detail page.

There are 3 image free text fields available, so you can, for example, use free text field 1 for a different title attribute.
The free text field is called up within the theme like this:
{$sArticle.image.attribute.attribute1}
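In the theme this could then end up in the image tag roughly like this — a simplified sketch in which the src and alt variables are only placeholders and may differ in your theme:

<img src="{$sArticle.image.source}" alt="{$sArticle.articleName}" title="{$sArticle.image.attribute.attribute1}">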
Since April 2017 Google has offered a new feature in the Google image search: similar items are suggested automatically.
The similar items only appear if the required Schema.org snippet is output on the article detail page.
The required Schema.org snippet is implemented in the Responsive theme as of Shopware 5.2.22. So far Google offers this feature in the mobile search only.
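A heavily simplified sketch of such Schema.org product markup — the values are placeholders, and the Responsive theme ships its own, more complete markup:

<div itemscope itemtype="http://schema.org/Product"> <meta itemprop="name" content="{$sArticle.articleName}"> <img itemprop="image" src="{$sArticle.image.source}" alt="{$sArticle.articleName}"> </div>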
There is an additional sitemap besides the normal sitemap.xml especially for mobile devices in Shopware 5. In this way you can show the Google crawlers unmistakably that the recorded URLs of the sitemap are only for mobile devices. This is explicitly requested by Google.
The Shopware sitemaps only contain SEO-relevant pages such as item detail pages and categories; pages like the account area or the shopping cart (noindex by default) are not included in the sitemaps. This guarantees a clean index structure in Shopware and offers the crawler only those pages which are interesting for the search engine providers and are declared as index, follow. Shopware does not cause the bots unnecessary crawling effort and delivers cleanly structured pages. This will definitely be rewarded with a better ranking by the search engines.
Furthermore, the creation of subshop-specific robots.txt files via template blocks has been simplified a lot. You can thus provide your own robots.txt for each individual subshop and extend it as you wish (e.g., excluding certain bots).
Never ever delete the original entries in the robots.txt. Otherwise the bots will crawl system paths which will then appear in the search results.
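A sketch of such an extension — the block name and template path follow Shopware 5 conventions but may differ depending on your version, and the disallowed path is only a placeholder; using append keeps the original entries intact:

{extends file="parent:frontend/robots_txt/index.tpl"} {block name="frontend_robots_txt_disallows" append} Disallow: /my-private-area/ {/block}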
The sitemaps purposely do not contain any variant or filter URLs.
SEO URLs for variants can also be generated in Shopware 5; when a variant URL is called up, the canonical URL points to the main item URL. This way duplicate content cannot appear!
The same handling applies to the filter URLs. As soon as a URL is extended by filter parameters (e.g., myitem/?p=1&o=1&n=12&max=425.00&s=1), Shopware automatically delivers a noindex,follow in the source code. This way the filter URLs are no longer relevant for search engines and will not appear in the sitemap.
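In the page source this then shows up as the usual robots meta tag:

<meta name="robots" content="noindex,follow">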
In this way Shopware prevents the search engines from having to unnecessarily crawl more URLs.
Shopware automatically generates a sitemap in XML format. This sitemap can be used with all relevant search engine providers (e.g., the Google Search Console / Webmaster Tools). However, Shopware cannot guarantee that every URL will be crawled and indexed — this always depends on the search engine provider.
The sitemap can be displayed via the following link:
This sitemap has been used in the past for feature phones with WAP browsers. If you use a frontend theme with WAP pages, you should to offer the mobile sitemap.
If you use the standard Responsive/Bare theme we strictly recommend to offer the sitemap.xml in Google Search Console.
The sitemap for wap-devices can be displayed via the following link:
Categorie:
[categories] => Array ( [0] => Array ( [id] => 5 [parentId] => 3 [name] => Genusswelten [position] => 0 [metaKeywords] => [metaDescription] => [cmsHeadline] => Genusswelten [cmsText] => Ivi praebalteata Occumbo congruens seco, lea qui se surculus sed abhinc praejudico in... [active] => 1 [template] => article_listing_1col.tpl [blog] => [path] => |3| [showFilterGroups] => 1 [external] => [hideFilter] => [hideTop] => [noViewSelect] => [changed] => DateTime Object [added] => DateTime Object [media] => [attribute] => Array ( [id] => 3 [categoryId] => 5 [attribute1] => [attribute2] => [attribute3] => [attribute4] => [attribute5] => [attribute6] => ) [childrenCount] => 3 [articleCount] => 44 [show] => 1 [urlParams] => Array ( [sViewport] => cat [sCategory] => 5 [title] => Genusswelten ) ) )
Article:
[articles] => Array ( [0] => Array ( [id] => 2 [changed] => DateTime Object [urlParams] => Array ( [sViewport] => detail [sArticle] => 2 ) ) )
Blog article:
[blogs] => Array ( [0] => Array ( [id] => 3 [category_id] => 17 [changed] => DateTime Object [urlParams] => Array ( [sViewport] => blog [sAction] => detail [sCategory] => 17 [blogArticle] => 3 ) ) )
Shop pages:
[customPages] => Array ( [0] => Array ( [id] => 1 [tpl1variable] => [tpl1path] => [tpl2variable] => [tpl2path] => [tpl3variable] => [tpl3path] => [description] => Kontakt [pageTitle] => [metaKeywords] => [metaDescription] => [html] => Fügen Sie hier Ihre Kontaktdaten ein [grouping] => gLeft|gBottom [position] => 2 [link] => shopware.php?sViewport=ticket&sFid=5 [target] => _self [changed] => DateTime Object [parentId] => 0 [attribute] => [children] => Array [urlParams] => Array ( [sViewport] => ticket [sFid] => 5 ) [show] => 1 ) )
Supplier pages:
[suppliers] => Array ( [0] => Array ( [id] => 1 [name] => shopware AG [image] => [link] => [description] => [metaTitle] => [metaDescription] => [metaKeywords] => [changed] => DateTime Object [urlParams] => Array ( [sViewport] => supplier [sSupplier] => 1 ) ) )
Landingpages:
[landingPages] => Array ( [0] => Array ( [0] => Array ( [id] => 5 [active] => 1 [name] => Stop The Water While Using Me [userId] => 60 [containerWidth] => 1008 [validFrom] => [isLandingPage] => 1 [landingPageBlock] => leftMiddle [landingPageTeaser] => media/image/testbild.jpg [seoKeywords] => [seoDescription] => [validTo] => [createDate] => DateTime Object [modified] => DateTime Object [showListing] => [gridId] => 2 [templateId] => 1 [attribute] => ) [categoryId] => 6 [show] => 1 [urlParams] => Array ( [sViewport] => campaign [emotionId] => 5 [sCategory] => 6 ) ) )
The good old RSS feed also has its right to exist in Shopware 5. Some search-engine providers support RSS feeds. This provides the possibility to crawl blog entries separately, for example.
Those RSS feeds are not only interesting for search engines. RSS feeds can also be embedded into many blog or CMS systems. This can be very interesting if you want to build backlinks. To better operate those incoming links, you should think about which areas of your shop you want to offer within the RSS feed.
Shopware 5 provides a separate RSS feed within every category and blog listing. This you can display as follows:
The RSS feed is displayed like this in the source code:
<link rel="alternate" type="application/rss+xml" title="Sommerwelten" href="" />
Besides the RSS standard, Shopware 5 provides a feed in the Atom format. This behaves just like the RSS feed, so it is available on the listing sites. The Atom feed can be displayed like this:
The Atom feed is displayed like this in the source code:
<link rel="alternate" type="application/atom+xml" title="Summertime" href="" />
Web crawlers read the file robots.txt even before the real shop URL is called, because of the Robots Exclusion Protocol. Via this file we can choose which URLs are allowed to be crawled by the search engine bot and which are not.
By Shopware default the robots.txt is already optimized and does not need any changes. The robots.txt of Shopware 5 is created dynamically from the template, so the template can be changed without any problems. You can provide an individual robots.txt for each of your subshops by using different templates.
You can add your changes to the following template blocks:
{block name="frontend_robots_txt_user_agent"} {block name="frontend_robots_txt_disallows"} {block name="frontend_robots_txt_sitemap"}
It can happen that crawlers find links within your site content, which lead for example to pdf or word-documents. Please ensure that the content of those documents is captured by search engines, so the content might be defining for the quality of your shop content.
If you want to hide your documents from the search engines you can easily do that by changes within the Shopware 5 template. In this example all pdf-files which would be available for the crawler by default, will be hidden from the search engines.
Extend the file /frontend/robots_txt/index.tpl and add the following changes to the block frontend_robots_txt_disallows:
Disallow: /*.pdf$
Link: Usefull tips from Google regarding the robots.txt
You should definitely remember to register all active host aliases in the shop settings. Host aliases are URLs which reference directly to the installation path of Shopware.
If you redirect your host aliases via 301 redirect to your shop URL, Shopware will not be able to recognize your host aliases. If the host aliases are not routed to the installation path of Shopware 5 properly or not all host aliases are saved, Shopware might not be able to set the follow and nofollow properly..
Link: to the documentation of the shop-settings.
SSL has become a big issue since the revelation scandal about the Ex-CIA employee Edward Snowden. After the revelations about the intelligence agencies you cannot be sure anymore about how safe an SSL-encryption is.
Nevertheless Google suggests that you save your shop by using a safe SSL certificate. Because of this we suggest encrypting your whole frontend via SSL. Valid SSL certificates can be normally be ordered from your host. You have to activate the setting use SSL in the shop settings.
Remember to set the default snippets for the meta description and the meta keywords. Otherwise you lose a lot of SEO-relevant information for the search engine crawlers.
The following snippets have to be set via the snippet administration in the Shopware backend:
Link: Documentation of category-administration
Store a meaningful title tag, meta description as well as meta keyword.
The Shopware 5 blog module provides the ideal possibility to present SEO-relevant texts. This is an opportunity you should definitely use. We suggest you set a divergent title tag and meta description for each blog item. The settings can be found in the menu on-page optimization.
Additionally you can promote your blog items through SEO-relevant items.
Work by sense of proportion for the headline of your blog text. For example do not double set the h1 tag, because it is already used by the default template.
Hints for creating blog-entries with Shopware.
Shopware provides special SEO settings for landing pages like its own SEO description and a freely selectable SEO URL by the landing page name. In this way you can place your shopping worlds and landing pages optimized for search engines.
Use the html block for the landing page to deposit SEO-optimized texts. Set the headlines in this block by sense of proportion. For example always set the h1 tag only once and always at the top of the landing page.
Link: To the documentation of landing pages in the shopping worlds
You can also set title, description, and keyword for shop sites independently of the content. Theoretically you can provide your forms ideally to the search engine crawlers.
Basically you should check if your shop site content is even SEO-relevant. Does it make sense to let your control of general terms and conditions be captured by the search engine?
Shopware 5 provides the possibility to set your shop sites to noindex, follow in the SEO router settings. In this way the search engines can only concentrate on your SEO-optimized items and category pages, as well as the shopping worlds.
Link: SEO optimized shop sites in Shopware 5
A major advantage of Shopware are the SEO possibilities for producer sites. If you have many items from a specific producer, you should think about optimizing the relevant producer site.
There are the usual SEO fields available in the backend producer module. We suggest that you use this option and place SEO-optimized text for the producers. Alternatively you can save an individual SEO URL here.
This is the matrix for using canonical urls in Shopware 5 when activating the rel==prev/next-Option in the SEO-Router-Settings:
Der canocial-Tag will be published on index,follow-Sites only.
Within a paginated site Shopware can work with rel=next/prev in the area if the setting in the SEO settings (Index paginated content) is activated in the SEO settings. In this way the control within the indexation is shown to the Google crawler.
Basically paginated sites are content-based but of no importance to the search engines. Because of this paginated sites, which export an rel/next, are provided with a "noindex,follow" (Also look for Matrix for the usage of canonical urls in the next entry).
The search engines continue crawling the paginated sites (f.ex. ?p=1), but don't capture them anymore. Thus Shopware avoids giving duplicate content to the search engines because of paginated sites by default.
If you offer search engine exclusive content. a noindex,follow automatically is set for all listing subpages. This behavior is particularly recommended by Google. Thus for example the following meta tag is set for all subpages of categories or filters (page 2, page 3, etc.) :
<meta name="robots" content="noindex,follow" />
Those listing-subpages will get captures by Google but not indexed. If this behavior is not desired you can activate the option "Index paginated content" in the new Shopware 5 SEO-router.
If you do not wish this behavior you should be careful about the indexation. If you release the paginated sites by search engines you have lots of new sites in the search index, but could possibly suffer from this by a downgrading of your sites within the search results because of duplicate content. To avoid this, you should place the identifiers in Google Webmaster tools. (Anker-Link innerhlab dieses Artikels.).
If you use language subshops, you should clearly set the XHTML-namespace for the respective language subshops. This is done by changing the snippet IndexXmlLang.
Within the default templates rich snippets are provided in Shopware by using the schema.org markups. You can check the state of your sites with the Google test tool for structured data
Caution: Whether the provided markups also appear on the SERP will be decided by the owner of the search engines and unfortunately not by shopware.
On the item detail page the following markups (schema.org) get displayed:
WebPage:
BreadcrumbList:
itemListElement (url, name, position, item)
SiteNavigationElement:
url name
Product:
name image productID sku description aggregateRating (ratingValue, worstRating, bestRating, ratingCount) brand (name) weight (name) offers (priceCurrency, price, availability) review (datePublished, name, reviewBody, reviewRating, author)
Example for a product site including evaluation (View in the Google search results):
Please ensure that, "in Stock" is only set if a positive stock and a delivery time in days is stored.
The following snippet-groups are read out:
WebPage:
BreadcrumbList:
itemListElement (url, name, position, item)
This element is not available on the starting page of your shop.
SiteNavigationElement:
url name
Example for a category site (View in the google search results):
In this chapter we show you some tips & tricks which help you to better place your Shopware installation in the search engine results. We strongly suggest that the realization of the separate tips as well as the usage of the recommended tools is no guarantee of a better ranking.
If you do not have any experience in the area of SEO optimization, we suggest that you work with a professional SEO agency. Also remember the fact that the recommended tools might cause a very high server workload when crawled. The inappropriate usage of this tool can eventually cause complete failure and the inaccessibility of your server for smaller systems.
Our Technology Partner Ryte offers an awesome onpage analyse tool for your shopware shop. Ryte offers almost all needed analysis options and tools for the sustainable onpage optimization of your shop.
Ryte helps you to optimize your content and thus generate more sales and more visitors to your shop. The tool detects duplicate content, wrong h-tag hierarchies and checks many other important ranking factors.
Ryte offers an interface to your Google Accounts (Search Console and Google Analytics). This means that your Ryte console has fully control to almost all important google datas.
The Ryte analyse tool is an allround talent made for seo-beginners and professionals. The SEO tool supports Ajax content and thus the shopware shopping worlds.
Link: Onpage-Analyze Tools especially for online-shops based on shopware
Rytes supports shopware shopping worlds in contrast to many other seo analyzis tools. To use this feature you have to set the option Analysis User-Agent to Googlebot (Media) in the project settings of your Ryte account.
This chapter deals with how you can exhaust the topic of SEO optimization by using the default functions and smaller template changes. Please keep in mind that some possibilities in this chapter can only be done by some smaller template changes and might be hard to implement for comparatively inexperienced users.
Adjusting the advanced menu
The link "learn more" gets displayed through the snippet "learnMoreLink" in the advanced menu. You can change this URL as well as the titletag of the anchor. An example of this:
*'''old:''' learn more *'''new:''' switch to {$mainCategory.cmsHeadline}"
Adjusting the comment link in the blog listing
There is a follow-up URL to the blog comments in the blog listing. The title tag can be optimized for search engines by changing the snippet BlogLinkComments. Go to the snippet administration in the backend and search for the snippet BlogLinkComments. You can use Smarty in this snippet so all arrays inserted in the blogtheme are available.
Example:
*'''old:''' Comments *'''new:''' To the comments of the item $sArticle.title}
Adjusting of the supplier link on the item detail page
The link to the producer site is displayed on the item detail page. This link consists of the snippet frontend/detail/description/DetailDescriptionLinkInformation (1).
If needed the snipped can be edited in the snippet administration in the backend.
Ideal URL-structure for items with multiple categorization
If your item is assigned to multiple categories (1) in your shop, Shopware 5 provides the possibility to influence the SEO URL output. In the item master data you can enter the ideal SEO URL for each shop in the window SEO categories of the item (2).
This provides the possibility to generate an SEO-relevant item URL, which fits the optimized keywords ideally. For every shop an ideal SEO URL can be generated for the item page.
Hide forms from search engines
By default Shopware 5 allows all forms to be indexed and released in the searchresults. You should think whether it makes sense for your forms to be crawled from the SEO-technical point of view.
If you do not want Google to crawl your forms, for example,, you have to enter the variable forms in the mask SEO-noindex viewports within the SEO-router settings in the Shopware backend. In this way,. all forms will be delivered with a noindex,followand will not be crawled.
Additional category descriptions in the category-listing footer and the shopping worlds
A little template adjustment makes it possible to offer additional search engine-relevant text below the listing by using the category free text fields. The content is completely recorded by the search engines. For this you only have to extend the block frontend_listing_bottom_paging.
Here we introduce you to the rudimentary setting and analysis possibilities of Google Webmaster tools.
Non-available sites - 404 or 410?
Shopware 5 provides the possibility to offer a differing state code for sites no longer available to the crawlers. By default the crawlers get a 404 for non-available sites. Alternating you can for example return a 410-Code (1) in Shopware 5.
Thus the search engine-crawler can recognize that the URL has been permanently removed and will not be available in the future.
You can change the HTTP state code in the SEO router settings to your liking:
An overview of available state codes you can find in the Google Webmaster tools Wiki or in the state Codes and protocol definitions of WC3.
Additionally you can enter your own "site not found" site in the SEO router settings. Therefore it is possible to display a special landing page including an individual error description.
Setting up Google Analytics & Adwords Conversion Tracking
Since Shopware 5 the Google-Plugin SwagGoogle is not delivered by default. We provide the Google Plugin for Shopware 5 on Github. The Shopware 5 Google Plugin can be downloaded there for free.
The plugin adds the requested source code of Google as well as optionally the Google Adwords Conversation to the frontend theme.
In the backend plugin configuration the Google Analytics ID (1)”'; and optionally the Google Adwords Conversion ID (2) can be entered. Alternatively you can choose between the normal Google Analytics and the Universal Analytics Tracking Code (3).
The current version of the plugin can always be found in the ShopwareLabs on github. Please mind that the plugin is not offered in the Shopware store. The plugin is not supported by shopware at all.
Alternatively you can use the Google Tag Manager. In this case you normaly use only one static code snippet. Through this code snippet recognized all interactions which will be triggered from your Google Analytics / Adwords account. Thus no further plugin is needed.
URL parameters in Google Webmaster tools
Furthermore you can exclude URL search parameters in the Google Webmaster tools. This only makes sense, if, for example because of the usage of foreign plugins, the crawler captures URLs, which are not to be captured, or cause duplicate content.
By Shopware 5, default sites including search parameters have indirectly been declared as noindex,follow. Thus you avoid possible crawling mistakes caused by dirty external links in advance.
Possible examples for the search parameters to exclude:
Parameter,Effect,Crawling c,Other,Only crawl representative URLs sInquiry,Other,Let Googlebot decide sOrdernumber,Other,Let Googlebot decide p,Changes,Reorders, or narrows page content,Let Googlebot decide number,Changes,Reorders, or narrows page content,Let Googlebot decide n,Seitenauswahl,Let Googlebot decide o,Changes,Reorders, or narrows page content,Let Googlebot decide s,Changes,Reorders, or narrows page content,Let Googlebot decide a,Other,Let Googlebot decide __cache,Changes,Reorders, or narrows page content,No URLs cf,Changes,Reorders, or narrows page content,No URLs f,Changes,Reorders, or narrows page content,Let Googlebot decide l,Other,Only crawl representative URLs max,Changes,Reorders, or narrows page content,Let Googlebot decide min,Changes,Reorders, or narrows page content,Let Googlebot decide q,Changes,Reorders, or narrows page content,Let Googlebot decide sAction,Other,Only crawl representative URLs shippingFree,Changes,Reorders, or narrows page content,No URLs sPage,Changes,Reorders, or narrows page content,No URLs sPartner,Other,Only crawl representative URLs sPerPage,Changes,Reorders, or narrows page content,No URLs sSort,Changes,Reorders, or narrows page content,Let Googlebot decide t,Other,Only crawl representative URLs u,Other,Only crawl representative URLs v,Other,Only crawl representative URLs sFilterTags,Changes,Reorders, or narrows page content,Let Googlebot decide sFilterDate,Changes,Reorders, or narrows page content,Let Googlebot decide sFilterAuthor,Changes,Reorders, or narrows page content,Let Googlebot decide
Detailed information about the Configuration of search parameters you can find in the Google Webmasters central-Blog.
Hreview-aggregate snippet for the front page
Use the possibility to emphasize your products in the Google search result list.
Please remember that this tutorial is an experimental attempt, to display such a micro-format in the theme. Google alone decides how your products are displayed. Thus we cannot guarantee that the product assessments will be shown in the search results of Google.
Additional sitemap for images, news etc.
Shopware provides a normal and a mobile sitemap by default. If you want an additional sitemap for your images, you can create this with a little script independently from Shopware and store this in Google Webmaster tools.
In this paragraph we deal with general but fundamental tips for the topic of search engine optimization. Those tips explicitly do not relate to Shopware.
Avoidance of duplicate content for the item descriptions
One of the cardinal sins for SEO are item descriptions, which have already been captured on other websites. Because of that you should always work with unique item descriptions. Sometimes you cannot avoid using an item description from the producer website. These item descriptions are of course used by a lot of your competitors as well.
Here you should think about setting these item pages to noindex and follow. Thus you tell the search engine crawlers not to enter this item into the search engine index but to continue crawling the site itself.
Alternatively you can use text agencies to create your item descriptions.
A healthy proportion of text and code (Text-to-Code-Ratio)
Make sure that your product description is not too short. The Text-to-code-ratio contains the proportion of Shopware source code and the text of the item description. Search engine optimizers suggest that the percentage of product description should be at least 25% of the source code.
Meaningful content
It is not sufficient to provide only a unique content. The content of course has to be equally meaningful and appealing to search engines and customers.
Avoiding h1 tags in the product description
Make sure not to use any h1 tags within the product description. The h1 tag is already used for the product headlines within default templates. Generally the h1 tag should only be set once per site.
Link: Onpage-Analyze Tools especially for online-shops based on shopware (supports shopware shopping worlds)
Link: Analyze html tags in your shop
Avoiding defective links
You should definitely pay attention to the accessibility of the links provided in your shop. This you can for example control with SEO tools.
Link: analyzes broken urls in the shop
Time to First Byte - Performance should be right
One of the most important criteria of a good search engine ranking is the performance. Shopware supports different cache-technologies by default: http-Cache, APCu, ZendOP. Combined with an efficient server, an ideal mysql- and php-configuration as well as an up-to-date php-Version the loading times of your site can be noticeably shortened.
Image sizes as well as external plugins, including additional css or java files, can also noticeably reduce the creation time of your site.
Useful tools: Link: Pagespeed testing with Google-bot
Link: Improve theloading times of your shop
Link: Webshop Monitoring Service
Link: Onpage-Analyzing tool for online-shops - TIP! Supports shopware shopping worlds and landing-pages!
SSL is required
Deliver your sites SSL-encrypted. A fitting and trustworthy certificate can be received for example from your host. If the certificate is established on your server you only have to activate the function Use SSL everywhere in the Shopware shop settings.
If you switch your site from http to https you have to correct your URL in your Google Webmaster tools account. By individual adjustments to the .htaccess file of Shopware, all direct requests can be forwarded from http to https.
Many hosting packages offer this option as standard so that this forwarding rule can also be stored in the server configuration. In this case no individual adjustments to the .ht access file are necessary.
Test your mobile sites after activating your own template or a Plugin
Generally the Shopware default responsive template has been optimized for mobile devices. If you use your own template or Plugin you should check those mobile templates after the installation to assure their functionality.
Freely adapted from the motto "Trust is good, control is better!" you should check the front page, the producer sites, the listings as well as the item detail pages.
Useful Tools:
SEO tools: Content or source code of the shopping world is not recognized
The content of a shopping world will appear in the developer consoles of the browsers only and not in the source view (e.g., view-source:)..
External crawlers or SEO tools only recognize the source code of a shopping world if these crawlers and tools will support boilerplate code. The Google bot supports this.
For example, the source code view of the browser or SEO tools from companies like Sistrix are not able to crawl boileplatecode. Their bots do not recognize html content embedded in shopping worlds because the tools are not able to identify ajax- embedded content. This is not a bug of shopware.
We recommend to use the Ryte SEO Analyze Tools which supports boilerplate code and shopware shopping worlds - other well-known tools unfortunately do not support this technique as yet.
You should not rely on the statements of such tools when capturing the source code of your shopping worlds. The tools capture only what they understand.
Optimize png- and jpg images
Shopware itself optimizes the images only in a rudimentary way.
If GooglePageSpeed complain about the compression level of your images, you must compress your images with tools like optpng or jpegoptiom via your own shellscripts once again and independently of Shopware.
In conjunction with own server cronjobs, the recompression can also be automated.
Link: Optimizing images with optipng
Link: Optimizing images with jpegoptim
Pushing articles via the internal linkjuice by using Shopware product streams
The number of incoming internal links can be affected by the Shopware Product Streams. If you want to increase the number of incoming links (Linkjuice) to one or more specific products, this is possible with a few kicks in the backend of Shopware. No additional plugins are required.
You just need to create a product stream including the articles you want to push. Assign this stream to a specific category, shopping world, or item detail page, if necessary.
Link: Pushing links with shopware product streams
Do not track the Paypal-Checkout-URL as a Referrer in Google Analytics
All orders created via Shopware's checkout using Paypal payment method (or others, e.g. Amazon or Klarna) will be identified by the Referrer-URL paypal.com.
You have to blacklist the URL paypal.com in your Google Analytics Administration under Tracking Information> Referrer Exclusion List. Otherwise the Referrer-URL paypal.com will get all conversions.
The SEO engine offers the possibility to rewrite search engine friendly URLs for your shop. For example -- would be replaced with.
This construction can be further customized with the SEO engine. For example, you can enter information such as manufacturer and/or item number in the link in order to have the desired URL structure.
Configuration options can be found under ‘’’Configuration/Basic settings/Frontend/SEO / router settings’’’
From Shopware 5.2.5 we implemented option "index paginated content" will set Page 1 to index,follow and all following pages to noindex,follow. Also canonical tags will only shown on shopping worlds without listing below (otherwise this will be Page1). When updating, this option is disabled.
When you clear the cache, only URLs of changing items will be regenerated when using the SEO engine. The date in the “Last update” field can be removed from the settings. Save the settings and clear the cache. All URLs will now be built and verified. Keep in mind that a maximum of 1000 new URLs can be created per request in the frontend.
If you use language shops, you can activate hreflang support after the update to version 5.5. In this case, the corresponding translations of the pages are output in their source code. This way search engines recognize that the language shop pages are translations and treat them accordingly. You can activate this feature via the folowing two options:
If the option Output href-lang in the meta tags: (1) is set active, all languages of the pages of your shop are output in the meta tags. Via the option Use in href-lang language and country: (2) you can choose if the country should be output in addition to the language, e.g. "en-EN" instead of just "en".
Building a template item
The following variables are available for the item template:
{$sArticle.id} {$sArticle.name} {$sArticle.ordernumber} {$sArticle.suppliernumber} {$sArticle.supplier} {$sArticle.date} {$sArticle.releasedate} {$sArticle.attr1} bis {$sArticle.attr20}
valid from Shopware 4.2
{$sArticle.metaTitle} {$sArticle.description} {$sArticle.keywords}
Building template categories
Free text fields from the categories (e.g., " {$sCategory.attribute. attribute1} ") can be used with Shopware 4.0.2 and up.
Examples for category variables:
{$sCategory.id} {$sCategory.path} {$sCategory.metaKeywords} {$sCategory.metaDescription} {$sCategory.cmsHeadline}
Examples for blog templates
{sCategoryPath categoryID=$blogArticle.categoryId} {$blogArticle.id} {$blogArticle.title} {$blogArticle.shortDescription} {$blogArticle.description}
Example supplier template
In the supplier seo template the only variable you can use is the supplier id via {$sSupplier.id}.
From Shopware 5.2.4
With Shopware 5.2.4 the SEO URL generation was outsourced in a framework, which was used also in the past, but is much more up-to-date and is very useful to generate international SEO URLs.
SEO Variables
Here we give you an overview of the variables which you can use for generating SEO URLs. These variables are arrays, so you first use the variable of the main item you want to use and switch via dot (.) to the next lower level. For a supplier of an article e.g. this would be {$sArticles.supplier}, because supplier is one level below sArticles.
Basically all listed variables are available for generating SEO URLs, but keep in mind, that the arrays can be different according how the item is build, e.g. if the article is a configurator item. Make sure, that your used variables are always available, otherwise your SEO URLs won't be generated correctly which might cause a problem regarding your SEO ranking!
To make sure, that your URLs will be generated right, you can work with if statements in the SEO URL rules to avoid empty variables, but also keep in mind for this, that the URLs should never change!
Items (Effective 5.2.6)
Array ( [id] => 49 [supplierID] => 2 [name] => DAYPACK [description] => SEO Description [description_long] => <p>My Description</p> [shippingtime] => [datum] => 2015-01-28 [active] => 1 [taxID] => 1 [pseudosales] => 0 [topseller] => 0 [metaTitle] => SEO Title [keywords] => Keywords [changetime] => 2015-01-28 10:12:12 [pricegroupID] => [pricegroupActive] => 0 [filtergroupID] => 5 [laststock] => 0 [crossbundlelook] => 0 [notification] => 1 [template] => [mode] => 0 [main_detail_id] => 264 [available_from] => [available_to] => [configurator_set_id] => [ordernumber] => SW10049 [suppliernumber] => [supplier] => LEKI [date] => 2015-01-28 [releasedate] => [changed] => 2015-01-28 10:12:12 [attr1] => [attr2] => [attr3] => [attr4] => [attr5] => [attr6] => [attr7] => [attr8] => [attr9] => [attr10] => [attr11] => [attr12] => [attr13] => [attr14] => [attr15] => [attr16] => [attr17] => [attr18] => [attr19] => [attr20] => )
Categories (Effective 5.2.6)
Array ( [id] => 5 [parentId] => 3 [streamId] => [name] => Mountain air & adventure [position] => 0 [metaTitle] => SEO Title [metaKeywords] => Keywords [metaDescription] => SEO Description [cmsHeadline] => [cmsText] => Description [active] => 1 [template] => [productBoxLayout] => minimal [blog] => [path] => |3| [external] => [hideFilter] => [hideTop] => [changed] => DateTime Object ( [date] => 2015-01-25 20:59:28.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [added] => DateTime Object ( [date] => 2015-01-25 20:59:28.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [mediaId] => [media] => [attribute] => Array ( [id] => 35 [categoryId] => 5 [attribute1] => [attribute2] => [attribute3] => [attribute4] => [attribute5] => [attribute6] => [attr1] => ) [childrenCount] => 2 [articleCount] => 97 )
Campaigns (Effective 5.2.6)
Array ( [id] => 3 [parentId] => [active] => 1 [name] => bree [userId] => 50 [position] => 1 [device] => 0,1,2,3,4 [fullscreen] => 0 [validFrom] => [isLandingPage] => 1 [seoTitle] => SEO Title [seoKeywords] => Keywords [seoDescription] => SEO Description [validTo] => [createDate] => DateTime Object ( [date] => 2015-02-24 09:19:51.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [modified] => DateTime Object ( [date] => 2016-08-31 15:57:22.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [rows] => 20 [cols] => 3 [cellSpacing] => 10 [cellHeight] => 185 [articleHeight] => 2 [showListing] => [templateId] => 1 [mode] => fluid [categories] => Array ( [0] => Array ( [id] => 7 [parentId] => 3 [streamId] => [name] => Craft & tradition [position] => 2 [metaTitle] => [metaKeywords] => [metaDescription] => [cmsHeadline] => [cmsText] => [active] => 1 [template] => [productBoxLayout] => image [blog] => [path] => |3| [external] => [hideFilter] => [hideTop] => [changed] => DateTime Object ( [date] => 2015-01-25 20:59:57.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [added] => DateTime Object ( [date] => 2015-01-25 20:59:57.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [mediaId] => ) ) )
Blog (Effective 5.2.6)
Array ( [id] => 2 [title] => On the tracks [authorId] => [active] => 1 [shortDescription] => [description] => <p>Description</p> [views] => 6 [displayDate] => DateTime Object ( [date] => 2015-03-18 09:30:00.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [categoryId] => 37 [template] => [metaKeyWords] => Keywords [metaDescription] => SEO Description [metaTitle] => SEO Title [tags] => Array ( ) [author] => [media] => Array ( ) [attribute] => Array ( [id] => 4 [blogId] => 2 [attribute1] => [attribute2] => [attribute3] => [attribute4] => [attribute5] => [attribute6] => [attr1] => ) [comments] => Array ( ) )
Forms (Effective 5.2.6)
Array ( [id] => 5 [name] => Contact [text] => <p>Write an E-Mail to us.</p> [email] => [email protected] [emailTemplate] => E-Mail-Template [emailSubject] => Contact form Shopware [text2] => <p>Your form was sent!</p> [ticketTypeid] => 1 [isocode] => en [metaTitle] => SEO Title [metaKeywords] => Keywords [metaDescription] => SEO Description [shopIds] => [attribute] => Array ( [id] => 1 [formId] => 5 [attr1] => ) )
Shop pages (Effective 5.2.6)
Array ( [id] => 2 [tpl1variable] => [tpl1path] => [tpl2variable] => [tpl2path] => [tpl3variable] => [tpl3path] => [description] => Help / Support [pageTitle] => [metaKeywords] => Keywords [metaDescription] => SEO Description [html] => Description [grouping] => Array ( [0] => gLeft ) [position] => 1 [link] => [target] => [shopIds] => [shops] => Array ( ) [changed] => Array ( [date] => 2016-08-29 15:10:42.000000 [timezone_type] => 3 [timezone] => Europe/Berlin ) [children] => Array ( ) [parentId] => [parent] => [attributes] => Array ( [core] => Array ( [id] => [cmsStaticID] => [attr1] => ) ) )
Supplier (Effective 5.2.6)
Array ( [id] => 1 [name] => Amplid [description] => <p>Description</p> [metaTitle] => SEO Title [metaDescription] => SEO Description [metaKeywords] => Keywords [link] => [coverFile] => media/image/amplid_logo.jpg [attributes] => Array ( [core] => Array ( [id] => 1 [supplierID] => 1 [attr1] => ) ) ) | https://docs.shopware.com/en/shopware-5-en/settings/seo?category=shopware-5-en/settings | 2022-08-07T23:02:55 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.shopware.com |
Defer
N.B. Deferral is a powerful, complex feature that enables compelling workflows. We reserve the right to change the name and syntax in a future version of dbt to make the behavior clearer and more intuitive. For details, see dbt#2968.
Defer is a powerful feature that makes it possible to run a subset of models or tests in a sandbox environment, without having to first build their upstream parents. This can save time and computational resources when you want to test a small number of models in a large project.
Defer requires that a manifest from a previous dbt invocation be passed to the --state flag or env var. Together with the state: selection method, these features enable "Slim CI". Read more about state.
Usage
$ dbt run --models [...] --defer --state path/to/artifacts
$ dbt test --models [...] --defer --state path/to/artifacts
When the --defer flag is provided, dbt will resolve ref calls differently depending on two criteria:
- Is the referenced node included in the model selection criteria of the current run?
- Does the referenced node exist as a database object in the current environment?
If the answer to both is no—a node is not included and it does not exist as a database object in the current environment—references to it will use the other namespace instead, provided by the state manifest.
Ephemeral models are never deferred, since they serve as "passthroughs" for other ref calls.
When using defer, you may be selecting from production datasets, development datasets, or a mix of both. Note that this can yield unexpected results
- if you apply env-specific limits in dev but not prod, as you may end up selecting more data than you expect
- when executing tests that depend on multiple parents (e.g. relationships), since you're testing "across" environments
Deferral requires both --defer and --state to be set, either by passing flags explicitly or by setting environment variables (DBT_DEFER_TO_STATE and DBT_ARTIFACT_STATE_PATH). If you use dbt Cloud, read about how to set up CI jobs.
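For example, the same deferred invocation can be configured through the environment instead of flags; a minimal sketch, where the artifact path is illustrative:
# equivalent to passing --defer --state path/to/artifacts on the command line
export DBT_DEFER_TO_STATE=true
export DBT_ARTIFACT_STATE_PATH=path/to/artifacts
dbt test --models [...]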
Example
In my local development environment, I create all models in my target schema, dev_alice. In production, the same models are created in a schema named prod.
I access the dbt-generated artifacts (namely manifest.json) from a production run, and copy them into a local directory called prod-run-artifacts.
run
I've been working on model_b:
select
    id,
    count(*)
from {{ ref('model_a') }}
group by 1
I want to test my changes. Nothing exists in my development schema, dev_alice.
- Standard run
- Deferred run
$ dbt run --models model_b
create or replace view dev_alice.model_b as (
    select
        id,
        count(*)
    from dev_alice.model_a
    group by 1
)
Unless I had previously run model_a into this development environment, dev_alice.model_a will not exist, thereby causing a database error.
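For comparison, the deferred run points dbt at the production artifacts copied earlier; a sketch:
$ dbt run --models model_b --defer --state prod-run-artifacts
Because model_a is neither selected nor present in dev_alice, dbt resolves ref('model_a') to prod.model_a from the manifest, so the view can build without model_a existing in dev.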
test
I also have a relationships test that establishes referential integrity between model_a and model_b:
version: 2

models:
  - name: model_b
    columns:
      - name: id
        tests:
          - relationships:
              to: ref('model_a')
              field: id
(A bit silly, since all the data in model_b had to come from model_a, but suspend your disbelief.)
- Without defer
- With defer
dbt test --models model_b
select count(*) as validation_errors
from (select id as id from dev_alice.model_b) as child
left join (select id as id from dev_alice.model_a) as parent
    on parent.id = child.id
where child.id is not null
    and parent.id is null
The relationships test requires both model_a and model_b. Because I did not build model_a in my previous dbt run, dev_alice.model_a does not exist and this test query fails.
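With deferral, the same test can run even though model_a was never built in dev_alice; a sketch:
$ dbt test --models model_b --defer --state prod-run-artifacts
The compiled query then reads model_a from the prod schema recorded in the manifest, so only model_b needs to exist in the development schema.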
Overview
This document will go over the main features of the website dashboard, which you can reach by accessing the websites via the organisation page. The website dashboard has a variety of tools for managing your website as well as building the content for those websites.
Dashboard Tour
The main navigation of the dashboard takes place in the navigation bar on the left-hand side of the page. Here is a quick rundown of the options included and what they do:
- Dashboard: This option will take you to the main dashboard page which will give you an overview of your website. This page is perfect if you just need a quick glance at the latest updates.
- Products: This option will bring you through to the product overview, where all your products can be found and managed. Information can be quickly imported from your inventory solution of choice, such as Bokun, to speed up the process, and then later edited directly from the CMS.
- Pages: This option brings you through to the pages manager. Here you will find the various kinds of pages that make up your site, and you will be able to add new pages, edit existing ones and delete them if required.
- Blogs: This option brings you to the central hub for managing your blog content. You will be able to add new blogs, edit existing blogs as well as delete blogs if you need to.
- Settings: This option brings you through to the settings page for your website. Here you are able to control some of the main settings for the site such as languages, currencies and the currently active modules that the site is using.
Mint Account List ("Hash List")
The typical method to create the mint list is to use a tool that finds all NFTs with a specific creator in the first position of the creators array. If your NFTs were minted with a candy machine this will be the candy machine creator id by default. If you have multiple candy machines that are part of the collection, you can create a separate mint list for each candy machine and combine them together to create a single mint list which you provide to the marketplace(s) you are listing with.
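If each tool produces a plain JSON array of mint addresses, the per-candy-machine lists can be merged with standard tooling; a minimal sketch using jq, with illustrative file names:
jq -s 'add | unique' candy-machine-1.json candy-machine-2.json > collection-mint-list.json
The -s flag slurps both arrays into one list, add concatenates them, and unique removes any duplicates.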
Candy Machine v2 has a separate candy_machine_id, found in your cache file, which identifies the candy machine on-chain, and a candy_machine_creator_id that it uses to sign the NFT by placing it as a verified creator in the creators array.
If you minted through the storefront or modified your NFTs to accommodate Phantom's collection ordering requirements, you might instead need to search by your creator wallet, if it is in the first position of the creators array.
Metaplex recommends only using tools that check for a verified creator, otherwise fake NFTs could end up on your list. The tools below are the ones we know of currently that check for a verified creator. If you have a tool that supports this, contact us, and we'll add it to the list.
The following third-party tools can be used to generate the mint list:
- Sol-NFT Tools A web-based tool that allows you to generate a mint list and has other useful features as well. Easiest to use for non-developers.
Usage: enter your creator or candy machine creator id in the "Get Mint IDs" tab.
- Magic Eden Hash List Finder A web-based tool that allows you to generate a mint list.
Usage: put in your verified candy machine creator id or verified first creator address.
- Metaboss A tool primarily targeted at developers which has a snapshot mints command for getting mint lists and many other useful features.
Usage: see the Metaboss documentation for the snapshot mints command.
Metaplex does not maintain the tools listed above. If you have issues with them please create issues in the appropriate GitHub repository, or ask in a general channel on the Discord to get help from community members. | https://docs.metaplex.com/guides/mint-lists | 2022-08-07T23:12:48 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.metaplex.com |
Working with Sugar
Full Sugar Video Tutorial
Sugar contains a collection of commands for creating and managing Metaplex Candy Machines. The complete list of commands can be viewed by running:
sugar
This will display a list of commands and their short description:
USAGE:
sugar [OPTIONS] <SUBCOMMAND>
OPTIONS:
-h, --help Print help information
-l, --log-level <LOG_LEVEL> Log level: trace, debug, info, warn, error, off
-V, --version Print version information
SUBCOMMANDS:
create-config Interactive process to create the config file
deploy Deploy cache items into candy machine config on-chain
help Print this message or the help of the given subcommand(s)
launch Create a candy machine deployment from assets
mint Mint one NFT from candy machine
show Show the on-chain config of an existing candy machine
update Update the candy machine config on-chain
upload Upload assets to storage and creates the cache config
validate Validate JSON metadata files
verify Verify uploaded data
withdraw Withdraw funds from candy machine account closing it
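Taken together, these subcommands form a typical deployment flow; a minimal sketch, assuming your assets and configuration file are already in place so the defaults from the config and cache files apply:
sugar validate   # check the JSON metadata files
sugar upload     # upload assets to storage and create the cache file
sugar deploy     # write the cache items into the candy machine on-chain
sugar verify     # verify the uploaded data
sugar mint       # mint one NFT as a smoke test
The launch command bundles the creation steps into a single command if you prefer.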
To get more information about a particular command (e.g., deploy), use the help command:
sugar help deploy
The list of options together with a short description will be displayed:
Deploy cache items into candy machine config on-chain
USAGE:
sugar deploy [OPTIONS]
OPTIONS:
-c, --config <CONFIG> Path to the config file, defaults to "config.json" [default:
config.json]
--cache <CACHE> Path to the cache file, defaults to "cache.json" [default:
cache.json]
-h, --help Print help information
-k, --keypair <KEYPAIR> Path to the keypair file, uses Sol config or defaults to
"~/.config/solana/id.json"
-l, --log-level <LOG_LEVEL> Log level: trace, debug, info, warn, error, off
-r, --rpc-url <RPC_URL> RPC Url
Note: This guide assumes that you set up your RPC URL and a keypair using the Solana CLI config, as described in the Quick Start section above.
Restoring a Windows instance from Safespring Backup to Safespring Compute
Cristie TSM Bare Machine Recovery (TBMR) is a solution that is included in the Safespring Backup service. This document describes how to restore an instance from Safespring Backup using TBMR.
Prerequisites
- The machine we are going to restore must be protected with the Safespring Backup service
- Your quota must be high enough to set up a new machine with the same resources as the one you will restore
- In order to create the restored machine you will need API access to Safespring's platform. Instructions for how to set that up can be found here.
Method
1. Rekey the node
Start by going to the backup portal and rekeying the node that you want to restore. You need to do this to get a password that you can use to connect to the backup server:
Copy the new password and the node name to a notepad.
2. Create volumes
Now go to the compute portal and create volumes of the same size and type (fast/large) as the machine you will restore. Also create one volume for the root file system (C:). It is a good idea to create volumes that are a couple of GBs bigger than the originals just to be sure that the restored files will have enough space.
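If you prefer the CLI over the portal, the same volumes can be created with the OpenStack client; a sketch with illustrative names, sizes and volume types (use the sizes and types that match your original machine):
openstack volume create --size 55 --type fast restored_c
openstack volume create --size 105 --type large restored_d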
3. Start an instance
Now start an instance which will use TBMR to restore the machine. Make sure to pick the TBMR image when you choose Source:
4. Give the machine internet access
Put the machine in a network where it can have internet access (it does not have to be in the same network as the instance you are restoring, that does not really matter). When you pick a flavor, "m.small" will suffice, even though a larger instance with more memory and vCPUs could make the restore go faster. It does not matter which security groups you choose since we will only interact with this instance through the web console.
After you have launched the instance make sure to assign a floating IP to be able to reach the internet.
5. Boot up the instance
Click on the name of the newly created TBMR instance and then click the "Console" tab at the top. You will see your newly created instance booting up. This takes a little while:
In the meantime, go to "Volumes" and click "Manage Attachments" in the drop-down menu of the volume you have created to restore the machine's root filesystem (C:).
Repeat the process for all the other volumes if the instance has more than just C:
In the volume listing you should now see that the empty volume is attached to your TBMR instance:
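The attachment can also be done from the CLI; a sketch, where <tbmr-instance> is whatever name you gave the TBMR instance:
openstack server add volume <tbmr-instance> restored_c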
6. Establish a connection to the backup server
Now go back to the web console for the TBMR instance and you should be greeted with the following dialogue:
Click the "Start the manual Recovery wizard" and then click "Next". Pick "Restore from a Spectrum Protect node" and then "Next"
In the next dialogue about the certificate - just click "Next". Now it is time to fill in the server and node information to restore. Unfortunately copy and paste does not work in the web console so you will have to fill in everything by hand. In order to get keyboard focus in the web console you also need to click on the gray area around the actual console window which is somewhat unintuitive.
Server Address: tsm1.backup.sto2.safedc.net
Port: 1600 (change from 1500)
Node name: The node name that you saved in a notepad from step one.
User Id: Leave blank
Password: The password you created from step 1. Make sure you type it in correctly.
7. Create the needed partitions
If you have filled in the information correctly you will see that you have successfully established a connection to the backup server. If you have any typos you will see red text telling you that you could not connect to the server. In that case click "Back" and ensure that you have gotten everything right.
When the fetching of configuration is done - click "Finish". Click "Next" in the next dialogue to create partitions and volumes.
In the volume and layout configuration make sure that you have enough space for the volumes. You can in this stage choose to Swap or Ignore the target volumes if TBMR does not get the correct layout automatically. You get these options by right-clicking on a disk or volume. Just make sure that you have all the space you need to restore the machine.
Click "Next" and then "Finish" when done. TBMR will now create the needed partitions. When that is done click "Close".
8. Start restoring files
Click "Next" to start restoring the actual files. Select all the file spaces you want to restore and click "Next".
The restore will now start. Depending on the size of the server this might take some time.
If the volume you are restoring is C: TBMR will also restore the system state:
When the restore is finished you will be greeted with this dialogue. Click "Finish".
9. Make the C: volume bootable
Click "Next" to make the C: volume bootable. In the next dialogue you just click "Next" again:
Click "Finish" to start the process and then "Close".
10. Install the needed drivers
Now it is time for TBMR to install the needed drivers. Click "Next" in the "Dissimilar Hardware" dialogue and then "Next" again. If the backup was made from an instance running in Safespring Compute you will get a message saying "No new devices were found in your system". Click "Finish".
11. Detach the volume
Now you get back to the TBMR Recovery Environment. We should not reboot since we now will create a new instance which will be the actual restored instance. Head to "Volumes" in Safespring Compute and detach the volume from the TBMR instance and then go to instances and delete the TBMR-instance after the volume has been detached.
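The detach and cleanup can likewise be done from the CLI; a sketch with the placeholder name from before:
openstack server remove volume <tbmr-instance> restored_c
openstack server delete <tbmr-instance>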
12. Make the restored C: drive bootable
Now it is time to make the volume for the restored C: drive bootable. In order to do this, you will need API access to Safespring's Compute platform set up correctly. Instructions for how to do that can be found here. Type these two commands. In the second command you should copy the ID of your volume from the output of the first command:
$ openstack volume list
+--------------------------------------+------------+-----------+------+-------------+
| ID                                   | Name       | Status    | Size | Attached to |
+--------------------------------------+------------+-----------+------+-------------+
| 382c4764-e971-4cbb-a454-5eba1e17bcc7 | restored_c | available | 50   |             |
+--------------------------------------+------------+-----------+------+-------------+

$ openstack volume set --bootable 382c4764-e971-4cbb-a454-5eba1e17bcc7
13. Move the instance
Now that the volume is made bootable you can go back to the Safespring Compute portal and create a new instance. The flavor should be the same as the original instance and it should be put in the same network as the original instance. Also set the same security groups as the original instance. When you pick "Source" you should pick "Volume" in the dropdown and then the volume you have restored to and just made bootable.
Launch the instance and assign a floating IP to it.
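For reference, the equivalent CLI calls for booting from the restored volume and attaching a floating IP look roughly like this, where the flavor, networks and security group are placeholders for the values of your original instance:
openstack server create --flavor <original-flavor> --network <original-network> \
  --security-group <original-secgroup> --volume restored_c restored-instance
openstack floating ip create <external-network>
openstack server add floating ip restored-instance <floating-ip-address>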
14. Watch your instance booting up
Go to the web console again and watch your restored instance booting up. When you come to the login screen, wait a bit and let the instance reboot by itself one more time. After the second reboot you can log in to your instance again, but a better option is to log in to it with RDP.
15. Check the backup software
All that is left now is to make sure that the backup software runs again on the restored machine. Go to the backup portal and rekey the node once more like you did in step 1. Copy the key to a notepad and go back to your instance. If you use RDP to connect to the instance you will be able to copy-and-paste the secret key, which will make this step less prone to errors.
Click the start menu icon and launch the "Backup-Archive Command Line" application:
16. Check the node name
When the command line client has started it will tell you its node name, which should match the node name of the node you have restored from. Press enter when prompted for user id and then enter the newly generated password you got from the rekey operation in the previous step.
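Once the password has been accepted you can verify that the node talks to the backup server again, for example by querying the current session from the same command line (a sketch; the exact commands available depend on the client version):
query session
query schedule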
17. Success
You have now successfully restored your Windows machine from backup. | https://docs.safespring.com/backup/howto/windows-restore/ | 2022-08-07T22:32:58 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../../images/restore-rekey.png', 'image'], dtype=object)