Columns: content, string (0 to 557k chars); url, string (16 to 1.78k chars); timestamp, timestamp[ms]; dump, string (9 to 15 chars); segment, string (13 to 17 chars); image_urls, string (2 to 55.5k chars); netloc, string (7 to 77 chars)
World The world environment can emit light, ranging from a single solid color to arbitrary textures. It works the same as the world material in Cycles. In Eevee, the world lighting contribution is first rendered and stored into smaller resolution textures before being applied to the objects. This makes the lighting less precise than in Cycles. See also
https://docs.blender.org/manual/fr/dev/render/eevee/world.html
2019-03-18T15:40:02
CC-MAIN-2019-13
1552912201455.20
[]
docs.blender.org
Create a policy assignment to identify non-compliant resources. If you don't have an Azure subscription, create a free account before you begin. Create a policy assignment In this quickstart, you create a policy assignment and assign the Audit VMs that do not use managed disks policy definition. Azure Policy comes with built-in policy definitions you can use. Many are available, such as: - Enforce tag and its value - Apply tag and its value - Require SQL Server version 12.0 For a partial list of available built-in policies, see Azure Policy samples. The managed identity checkbox must be checked when the policy or initiative includes a policy with the deployIfNotExists effect. As the policy used for this quickstart doesn't, leave it blank. For more information, see managed identities and how remediation security works. Click Assign. You're now ready to identify non-compliant resources to understand the compliance state of your environment. Identify non-compliant resources Select Compliance in the left side of the page. Then locate the Audit VMs that do not use managed disks policy assignment you created. If there are any existing resources that aren't compliant with this new assignment, they appear under Non-compliant resources. When a condition is evaluated against your existing resources and found true, then those resources are marked as non-compliant with the policy. Clean up resources To remove the assignment created, follow these steps. Next steps The policy assignment you created validates that all the resources in the scope are compliant and identifies which ones aren't. To learn more about assigning policies to validate that new resources are compliant, continue to the tutorial.
https://docs.microsoft.com/en-us/azure/governance/policy/assign-policy-portal
2019-03-18T15:54:17
CC-MAIN-2019-13
1552912201455.20
[array(['media/assign-policy-portal/policy-compliance.png', 'Policy compliance'], dtype=object) ]
docs.microsoft.com
World Size & LOD settings World Size The following properties can be used in the voxel world details World Size category to control the world size: - Octree Depth: the depth of the octree of the voxel world - World Size: the size of the world, given by the octree depth: CHUNK_SIZE * 2^octree_depth However, these settings aren't really precise. For more precise control, the Custom World Bounds property can be used. Warning Chunks with a high LOD might ignore the custom bounds. To fix this, you can add a check in your world generator to return 1 when the position is outside the bounds. Tip You can debug the world bounds by using the Show World Bounds and World Bounds Thickness properties in the Voxel/Debug category of the voxel world details, or by using the voxel.ShowWorldBounds command. Performance considerations You should use bounds as small as possible to improve the generation speed. If for instance you have a world size of 4096, 4096 x 4096 x 4096 voxels will be generated. However, if your world doesn't go deeper than -256 voxels and higher than 512, not all those voxels are needed: using custom bounds like (-2048, -2048, -256), (2048, 2048, 512) will improve performance a lot for the same visual results. LOD Settings The LODs are determined by the Voxel Invokers Components in the scene. Usually you only need a single voxel invoker on your character. The following properties can be used in the voxel world details LOD Settings category to control the LODs: - LOD Limit: the chunks can't have a LOD higher than this. Useful if you don't want to have a very low poly look when you're far from your world. If you don't want any LOD, you can set it to 0. - LOD to Min Distance: If LODToMinDistance[L] = D, then all the chunks under a distance D from a voxel invoker (in world space) will have a LOD less than or equal to L. Warning Be careful when changing the LOD settings, as you can easily freeze UE and use all the available RAM with settings that are too high (e.g. a LOD Limit of 0 on a big world).
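As a rough companion to the numbers above, here is a small Python sketch (not part of the plugin; CHUNK_SIZE = 32 and the helper names are assumptions made for the example) that derives the world size from the octree depth and compares the voxel counts of the full world and the custom bounds:

CHUNK_SIZE = 32  # assumed chunk size; the text only states World Size = CHUNK_SIZE * 2^octree_depth

def world_size(octree_depth):
    # World Size property as described above
    return CHUNK_SIZE * 2 ** octree_depth

def voxel_count(bounds_min, bounds_max):
    # Number of voxels inside an axis-aligned bounds box
    return ((bounds_max[0] - bounds_min[0]) *
            (bounds_max[1] - bounds_min[1]) *
            (bounds_max[2] - bounds_min[2]))

print(world_size(7))  # 4096

full = voxel_count((-2048, -2048, -2048), (2048, 2048, 2048))   # full 4096^3 world
custom = voxel_count((-2048, -2048, -256), (2048, 2048, 512))   # same footprint, limited height
print(full / custom)  # ~5.3, i.e. roughly 5x fewer voxels with the custom bounds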
https://voxel-plugin.readthedocs.io/en/latest/docs/worldsize_lod_settings.html
2019-03-18T16:24:55
CC-MAIN-2019-13
1552912201455.20
[]
voxel-plugin.readthedocs.io
Groovy is a powerful tool. Like other powerful tools (think of a chainsaw) it requires a certain amount of user expertise and attention. Otherwise, results may be fatal.

The following code fragments are allowed in Groovy but usually result in unintended behaviour or incomprehensible problems. Simply don't do this and spare yourself some of the frustration that new and unskilled Groovy programmers sometimes experience.

1. Accessing an object's type like a property

Using .class instead of .getClass() is ok - as long as you know exactly what kind of object you have. But then you don't need that at all. Otherwise, you run the risk of getting null or something else, but not the class of the object.

a = [:]
println a.class.simpleName // NullPointerException, because a.class is null.

2. Omitting parentheses around method arguments when not appropriate

Better use parentheses for argument lists unless you are sure. For instance, a left parenthesis after a method name must always enclose the parameter list.

println a*(b+c) // is OK
println (a+b)*c // NullPointer exception, because println (a+b) results in null.

3. Putting a newline at the wrong place into the code

This may cause the compiler to assume a complete statement although it is meant as continuing in the next line. It is not always as obvious as in the following example.

myVariable = "This is a very long statement continuing in the next line. Result is="
+ 42 // this line has no effect

4. Forgetting to write the second equals sign of the equals operator

As a result, a comparison expression turns into an assignment.

while (a=b) { ... } // will be executed as long as b is true (Groovy-truth)
String s = "A"; boolean b = s = "B" // b becomes true

(One measure is to always put constant expressions before the equals operator in comparisons.)

5. Specifying the wrong return type when overriding a method (Groovy version < 1.5.1 only)

Such a method may be considered as overloaded (a different method with the same name) and not be called as expected.

def toString() { ... } // does not override toString() inherited from Object

6. Disregarding other objects' privacy

When accessing methods, fields, or properties of other classes, make sure that you do not interfere with private or protected members. Currently Groovy doesn't distinguish properly between public, private, and protected members, so watch out yourself.

z = [1,2,3]
z.size = 10 // size is private; errors when trying to access z[3] to z[9], although z.size()==10
'more'.value = 'less' // results in "more"=="less"

7. Thoughtless dynamic programming

Check for conflicts with intended class functionality before adding or changing methods or properties using Groovy's dynamic facilities.

String.metaClass.class = Integer // results in 'abc'.getClass()==java.lang.Integer

8. String concatenation

As in Java, you can concatenate Strings with the "+" symbol. But Java only needs one of the two items of a "+" expression to be a String, no matter if it's in the first place or in the last one. Java will use the toString() method on the non-String object of your "+" expression. But in Groovy, you should make sure that the first item of your "+" expression implements the plus() method in the right way, because Groovy will search for and use it. In the Groovy GDK, only the Number and String/StringBuffer/Character classes have the plus() method implemented to concatenate strings. To avoid surprises, always use GStrings.

// Java code, it works
boolean v = true;
System.out.println(" foo "+v);
System.out.println(v+" foo ");

// Groovy code
boolean v = true
println " foo "+v // It works
println v+" foo " // It fails with MissingMethodException: No signature of method: java.lang.Boolean.plus()
http://docs.codehaus.org/exportword?pageId=46956559
2014-04-16T08:09:19
CC-MAIN-2014-15
1397609521558.37
[]
docs.codehaus.org
You can manage survey data on the Survey tab of the Task Pane AutoCAD Map 3D allows you to manage survey point data. You can do the following with survey data: For example, if each survey point represents a telephone pole, you can export the points to an SDF file called Telephone_poles.sdf. You can then add Telephone_poles.sdf to your map using Data Connect and work with the point data as geospatial features. Survey data is kept in a dedicated SDF data store. You can add new properties and classes to the survey data store schema, but be careful not to alter or remove the existing properties and classes. Points in a survey data store are in read-only mode until you click Edit at the top of the Task Pane. Clicking Edit puts AutoCAD Map 3D into direct edit mode, which means that any changes you make to the points in AutoCAD Map 3D are immediately applied to the data store. You can reorganize survey points without entering Edit mode (for example, you can move points between point groups). When working with survey data, you must work online. If you work offline, AutoCAD Map 3D disconnects from the survey data store, and the survey tree disappears. Use the following methods to work with survey data.
http://docs.autodesk.com/CIV3D/2013/ENU/filesMAPC3D/GUID-FB3F384B-86E3-4F8A-B2BA-C78A2F0AEEC1.htm
2014-04-16T07:15:03
CC-MAIN-2014-15
1397609521558.37
[]
docs.autodesk.com
ARTICLE X. EDUCATION X,1 Superintendent of public instruction. Section 1. [ As amended Nov. 1902 and Nov. 1982 ]. [ ] The clear proceeds of fines imposed, at least 50% under s. 59.20 (8) [now s. 59.25 (3) (j)] after the accused forfeits a deposit by nonappearance must be sent to the state treasurer for the school fund. 58 Atty. Gen. 142. Money resulting from state forfeitures action under ss. 161.555 [now s. 961.555] and 973.075 (4) must be deposited in the school fund. Money granted to the state after a federal forfeiture proceeding need not be. 76 Atty. Gen. 209 . X,3. [ 1969 J.R. 37, 1971 J.R. 28, vote April 1972 ] The constitution does not require that school districts be uniform in size or equalized valuation. Larson v. State Appeal Board 56 Wis. 2d 823, 202 N.W.2d 920. Public schools may sell or charge fees for the use of books and items of a similar nature when authorized by statute without violating this section. Board of Education v. Sinclair, 65 Wis. 2d 179, 222 N.W.2d 143. Use of the word "shall" in s. 118.155, making cooperation by school boards with programs of religious instruction during released time mandatory rather than discretionary does not infringe upon the inherent powers of a school board. State ex rel. Holt v. Thompson, 66 Wis. 2d 659, 225 N.W.2d 678. School districts are not constitutionally compelled to admit gifted four-year old children into kindergarten. Zweifel v. Joint Dist., No. 1, Belleville, 76 Wis. 2d 648, 251 N.W.2d 822. The mere appropriation of public monies to a private school does not transform that school into a district school under this section. Jackson v. Benson, 218 Wis. 2d 835, 578 N.W.2d 602 (1998), 97-0270 . The school finance system under ch. 121 is constitutional under both art. I, sec. 1 and art. X, sec. 3. Students have a fundamental right to an equal opportunity for a sound basic education. Uniform revenue-raising capacity among districts is not required. Vincent v. Voight, 2000 WI 93, 236 Wis. 2d 588, 614 N.W.2d 388, 97-3174 . The state and its agencies, except the department of public instruction, constitutionally can deny service or require the payment of fees for services to children between age 4 and 20 who seek admission to an institution or program because school services are lacking in their community or district. 58 Atty. Gen. 53. VTAE schools [now technical colleges] are not "district schools" within the meaning of this section. 64 Atty. Gen. 24. Public school districts may not charge students for the cost of driver education programs if the programs are credited towards graduation. 71 Atty. Gen. 209 . Having established the right to an education, the state may not withdraw the right on grounds of misconduct absent fundamentally fair procedures to determine if misconduct occurred. Attendance by the student at expulsion deliberations is not mandatory; all that is required is the student have the opportunity to attend and present his or her case. Remer v. Burlington Area School District, 149 F. Supp. 2d 665 (2001). Intrastate inequalities in public education; the case for judicial relief under the equal protection clause. Silard, White, 1970 WLR 7. The constitutional mandate for free schools. 1971 WLR 971. X,4 Annual school tax. Section 4.
Each town and city shall be required to raise by tax, annually, for the support of common schools therein, a sum not less than one-half the amount received by such town or city respectively for school purposes from the income of the school fund. X,5 Income of school fund. Section 5.. X,6 State university; support. Section 6.. Vocational education is not exclusively a state function. West Milwaukee v. Area Board of Vocational, Technical and Adult Education, 51 Wis. 2d 356, 187 N.W.2d 387. X,7 Commissioners of public lands. Section 7.. X,8 Sale of public lands. Section 8.. The legislature may direct public land commissioners to invest monies from the sale of public lands in student loans but may not direct a specific investment. 65 Atty. Gen. 28. State reservation of land and interests in lands under ch. 452, laws of 1911, 24.11 (3) and Art. X, sec. 8 is discussed. 65 Atty. Gen. 207.
http://docs.legis.wi.gov/2007/related/wiscon/_19
2014-04-16T07:15:51
CC-MAIN-2014-15
1397609521558.37
[]
docs.legis.wi.gov
numpy.polynomial.polynomial.polyvander2d Returns the pseudo-Vandermonde matrix of the given degrees and sample points, whose columns are products of powers of x and y. If V = polyvander2d(x, y, [xdeg, ydeg]), then the columns of V correspond to the elements of a 2-D coefficient array c of shape (xdeg + 1, ydeg + 1) in the order c_00, c_01, c_02, ..., c_10, c_11, c_12, ..., and np.dot(V, c.flat) and polyval2d(x, y, c) will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D polynomials of the same degrees and sample points. See also polyvander, polyvander3d, polyval2d, polyval3d.
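To make the equivalence concrete, here is a minimal sketch (assuming only that NumPy is installed; the sample data is made up for illustration) that builds the pseudo-Vandermonde matrix and checks it against polyval2d:

import numpy as np
from numpy.polynomial import polynomial as P

x = np.random.rand(5)
y = np.random.rand(5)
c = np.arange(6.0).reshape(2, 3)   # coefficient array with xdeg = 1, ydeg = 2

V = P.polyvander2d(x, y, [1, 2])   # shape (5, (1 + 1) * (2 + 1)) = (5, 6)

# The matrix-vector product reproduces direct evaluation up to roundoff.
print(np.allclose(V @ c.ravel(), P.polyval2d(x, y, c)))   # True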
http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.polynomial.polynomial.polyvander2d.html
2014-04-16T07:28:35
CC-MAIN-2014-15
1397609521558.37
[]
docs.scipy.org
I (JulienV 13:35, 2 November 2011 (CDT))
http://docs.joomla.org/index.php?title=Talk:Adapting_a_Joomla_1.5_extension_to_Joomla_1.6&diff=62809&oldid=60530
2014-04-16T08:49:46
CC-MAIN-2014-15
1397609521558.37
[]
docs.joomla.org
Converter Nodes As the name implies, these nodes convert the colors or other properties of various data (e.g. transparency) in some way. They also split out or re-combine the different color channels that make up an image, allowing you to work on each channel independently. Various color channel arrangements are supported, including traditional RGB, HSV, YUV and YCbCr formats.
https://docs.blender.org/manual/es/latest/compositing/types/converter/index.html
2021-01-15T18:13:11
CC-MAIN-2021-04
1610703495936.3
[]
docs.blender.org
Can I Cancel My Subscription (Auto-Renewal)? Yes, you can cancel your subscription (auto-renewal) from your account at any given moment. You are not obligated to renew your license if you do not want to. Please note that if you cancel your subscription you may lose your automatic renewal price lock, therefore we advise you to wait until your license is set to expire and you have reached a decision whether or not you want to renew your license.
https://docs.oceanwp.org/article/703-can-i-cancel-subscription-auto-renewal
2021-01-15T18:43:55
CC-MAIN-2021-04
1610703495936.3
[]
docs.oceanwp.org
Saving a Scene in Paint It is important to regularly save your scene. As you make changes to a scene, an asterisk (*) appears in the title bar beside the scene name to indicate that the scene contains unsaved changes. In Paint, you can only make changes to drawings in a scene, not to the scene's structure or set-up. When you save in Paint, all the changes you made to the loaded drawings are saved to the database. Do one of the following: - In the top menu, select File > Save. - Press Ctrl + S (Windows/Linux) or ⌘ + S (macOS).
https://docs.toonboom.com/help/harmony-17/paint/project-creation/save-scene-paint.html
2021-01-15T17:29:45
CC-MAIN-2021-04
1610703495936.3
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
You can manage access to a flow for other users through the Share Flow dialog. In Flow View, select Share from the context menu. When you grant another user access to one of your flows, you both can work on the objects of the flow. You can take turns editing the recipes, and you can add more collaborators to the flow, so that you may work together on the same objects. Figure: Manage Access Tab To add users as collaborators in your flow, start typing the email address of a user with whom you'd like to collaborate. Repeat this process to add multiple users. NOTE: Each entry must be a valid email address of a user who has access to your project. In the above image, valid email addresses are obscured for security purposes. To save your changes, click Save. Each selected user now can access the flow through their flows page. See Flows Page. NOTE: Collaborators have a reduced set of permissions on the flow. For example, they cannot edit the flow name or description or delete it. See Overview of Sharing.
https://docs.trifacta.com/exportword?pageId=160412444
2021-01-15T18:54:49
CC-MAIN-2021-04
1610703495936.3
[]
docs.trifacta.com
# Creating and configuring the Zowe instance directory The Zowe instance directory or <INSTANCE_DIRECTORY> contains configuration data required to launch a Zowe runtime. This includes port numbers, location of dependent runtimes such as Java, Node, z/OSMF, as well as log files. When Zowe is started, configuration data will be read from files in the instance directory and logs will be written to files in the instance directory. Note: The creation of an instance directory will set default values for users who want to run all Zowe z/OS components. If you are using Docker, you must make a small configuration change to disable the components on z/OS that will instead run in Docker. The instance directory <INSTANCE_DIRECTORY>/bin contains a number of key scripts: zowe-start.sh is used to start the Zowe runtime by launching the ZWESVSTC started task. zowe-stop.sh is used to stop the Zowe runtime by terminating the ZWESVSTC started task. zowe-support.sh can be used to capture diagnostics around the Zowe runtime for troubleshooting and off-line problem determination; see Capturing diagnostics to assist problem determination. # Prerequisites Before creating an instance directory, ensure that you have created a keystore directory that contains the Zowe certificate. For more information about how to create a keystore directory, see Creating Zowe certificates. Also, ensure that you have already configured the z/OS environment. For information about how to configure the z/OS environment, see Configuring the z/OS system for Zowe. # Creating an instance directory To create an instance directory, use the zowe-configure-instance.sh script. Navigate to the Zowe runtime directory <RUNTIME_DIR> and execute the following commands: <RUNTIME_DIR>/bin/zowe-configure-instance.sh -c <PATH_TO_INSTANCE_DIR> Multiple instance directories can be created and used to launch independent Zowe runtimes from the same Zowe runtime directory. The Zowe instance directory contains a file instance.env that stores configuration data. The data is read each time Zowe is started. The purpose of the instance directory is to hold information in the z/OS File System (zFS) that is created (such as log files) or modified (such as preferences) or configured (such as port numbers) away from the zFS runtime directory for Zowe. This allows the runtime directory to be read-only and to be replaced when a new Zowe release is installed, with customizations being preserved in the instance directory. If you have an instance directory that is created from a previous release of Zowe 1.8 or later and are installing a newer release of Zowe, then you should run zowe-configure-instance.sh -c <PATH_TO_INSTANCE_DIR> pointing to the existing instance directory to have it updated with any new values. The release documentation for each new release will specify when this is required, and the file manifest.json within each instance directory contains information for which Zowe release it was created from. In order to allow the ZWESVSTC started task to have permission to access the contents of the <INSTANCE_DIR>, the zowe-configure-instance.sh script sets the group ownership of the top level directory and its children to be ZWEADMIN. If a different group is used for the ZWESVSTC started task, you can specify this with the optional -g argument, for example: <RUNTIME_DIR>/bin/zowe-configure-instance.sh -c <PATH_TO_INSTANCE_DIR> -g <GROUP> # Reviewing the instance.env file To operate Zowe, a number of zFS folders need to be located for prerequisites on the platform.
Default values are selected when you run zowe-configure-instance.sh. You might want to modify the values. # Component groups LAUNCH_COMPONENT_GROUPS: This is a comma-separated list of which z/OS microservice groups are started when Zowe launches. GATEWAY will start the API mediation layer that includes the API catalog, the API gateway, and the API discovery service. These three address spaces are Apache Tomcat servers and use the version of Java on z/OS as determined by the JAVA_HOME value. In addition to the mediation layer, the z/OS Explorer services are included here as well. DESKTOP will start the Zowe desktop that is the browser GUI for hosting Zowe applications such as the 3270 Terminal emulator or the File Explorer. It will also start ZSS. The Zowe desktop is a node application and uses the version specified by the NODE_HOME value. ZSS will start the ZSS server without including the Desktop and Application Framework server. This can be used with Docker so that you do not run servers on z/OS that will already be running within Docker. This may also be useful if you want to utilize ZSS core features or plug-ins without needing the Desktop. ZSS is a prerequisite for the Zowe desktop, so when the DESKTOP group is specified then the ZSS server will be implicitly started. For more information on the zssServer and the technology stack of the Zowe servers see Zowe architecture. - Vendor products may extend Zowe with their own component group that they want to be lifecycled by the Zowe ZWESVSTC started task and run as a Zowe sub address space. To do this, specify the fully qualified directory provided by the vendor that contains their Zowe extension scripts. This directory will contain a start.sh script (required) that is called when the ZWESVSTC started task is launched, a configure.sh script (optional) that performs any configuration steps such as adding iFrame plug-ins to the Zowe desktop, and a validate.sh script (optional) that can be used to perform any pre-launch validation such as checking system prerequisites. For more information about how a vendor can extend Zowe with a sub address space, see the Extending section. Note: If you are using Docker, it is recommended to remove GATEWAY and DESKTOP from LAUNCH_COMPONENT_GROUPS by setting LAUNCH_COMPONENT_GROUPS=ZSS. This will prevent duplication of servers running both in Docker and on z/OS. Technical Preview # Component prerequisites JAVA_HOME: The path where 64-bit Java 8 or later is installed. Only needs to be specified if not already set as a shell variable. Defaults to /usr/lpp/java/J8.0_64. NODE_HOME: The path to the Node.js runtime. Only needs to be specified if not already set as a shell variable. SKIP_NODE: When Zowe starts, it checks whether the NODE_HOME path is a valid node runtime. If not, it will prompt for the location of where node can be located. Specify a value of 1 to bypass this step, or 0 for the check to occur. This may be useful in an automation scenario where the zowe-start.sh script is run unattended and the makeup of the components being launched does not require a node runtime. ROOT_DIR: The directory where the Zowe runtime is located, also referred to as the <RUNTIME_DIR>. Defaults to the location of where zowe-configure-instance was executed. ZOSMF_PORT: The port used by z/OSMF REST services. Defaults to a value determined through running netstat. ZOSMF_HOST: The host name of the z/OSMF REST API services. ZOWE_EXPLORER_HOST: The hostname of where the Explorer servers are launched from. Defaults to running hostname -c.
Ensure that this host name is externally accessible from clients who want to use Zowe as well as internally accessible from z/OS itself. ZOWE_IP_ADDRESS: The IP address of your z/OS system which must be externally accessible to clients who want to use Zowe. This is important to verify for IBM Z Development & Test Environment and cloud systems, where the default that is determined through running ping and dig on z/OS returns a different IP address from the external address. APIML_ENABLE_SSO: Define whether single sign-on should be enabled. Use a value of true or false. Defaults to false. # Keystore configuration KEYSTORE_DIRECTORY: This is a path to a zFS directory containing the certificate that Zowe uses to identify itself and encrypt https:// traffic to its clients accessing REST APIs or web pages. This also contains a truststore used to hold the public keys of any z/OS services that Zowe is communicating to, such as z/OSMF. The keystore directory must be created the first time Zowe is installed onto a z/OS system and it can be shared between different Zowe runtimes. For more information about how to create a keystore directory, see Configuring Zowe certificates. # Address space names Individual address spaces for different Zowe instances and their subcomponents can be distinguished from each other in RMF records or SDSF views by specifying how they are named. Address space names are 8 characters long and made up of a prefix ZOWE_PREFIX, an instance ZOWE_INSTANCE, followed by an identifier for each subcomponent. ZOWE_PREFIX: This defines a prefix for Zowe address space STC names. Defaults to ZWE. ZOWE_INSTANCE: This is appended to the ZOWE_PREFIX to build up the address space name. Defaults to 1. A subcomponent will be one of the following values: - AC - API ML Catalog - AD - API ML Discovery Service - AG - API ML Gateway - DS - App Server - EF - Explorer API Data Sets - EJ - Explorer API Jobs - SZ - ZSS Server - UD - Explorer UI Data Sets - UJ - Explorer UI Jobs - UU - Explorer UI USS The STC name of the main started task is ZOWE_PREFIX + ZOWE_INSTANCE + SV. Example: with ZOWE_PREFIX=ZWE and ZOWE_INSTANCE=X, the identifier of the first instance of the Zowe API ML Gateway will be as follows: ZWEXAG Note: If the address space names are not assigned correctly for each subcomponent, check that the step Configure address space job naming has been performed correctly for the z/OS user ID ZWESVUSR. # Ports When Zowe starts, a number of its microservices need to be given port numbers that they can use to allow access to their services. You can leave default values for components that are not in use. The two most important port numbers are the GATEWAY_PORT which is for access to the API gateway through which REST APIs can be viewed and accessed, and ZOWE_ZLUX_SERVER_HTTPS_PORT which is used to deliver content to client web browsers logging in to the Zowe desktop. All of the other ports are not typically used by clients and are used for intra-service communication by Zowe. CATALOG_PORT: The port the API catalog service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. DISCOVERY_PORT: The port the discovery service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. GATEWAY_PORT: The port the API gateway service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. This port is used by REST API clients to access z/OS services through the API mediation layer, so should be accessible to these clients. This is also the port used to log on to the API catalog web page through a browser.
JOBS_API_PORT: The port the jobs API service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. FILES_API_PORT: The port the files API service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. JES_EXPLORER_UI_PORT: The port the jes-explorer UI service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. MVS_EXPLORER_UI_PORT: The port the mvs-explorer UI service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. USS_EXPLORER_UI_PORT: The port the uss-explorer UI service will use. Used when LAUNCH_COMPONENT_GROUPS includes GATEWAY. ZOWE_ZLUX_SERVER_HTTPS_PORT: The port used by the Zowe desktop. Used when LAUNCH_COMPONENT_GROUPS includes DESKTOP. It should be accessible to client machines with browsers wanting to log on to the Zowe desktop. ZOWE_ZSS_SERVER_PORT: This port is used by the ZSS server. Used when LAUNCH_COMPONENT_GROUPS includes DESKTOP or ZSS. Note: If all of the default port values are acceptable, the ports do not need to be changed. To allocate ports for the Zowe runtime servers, ensure that the ports are not in use. To determine which ports are not available, follow these steps: Display a list of ports that are in use with the following command: TSO NETSTAT Display a list of reserved ports with the following command: TSO NETSTAT PORTLIST # Terminal ports Note: Unlike the ports needed by the Zowe runtime for its Zowe Application Framework and z/OS Services which must be unused, the terminal ports are expected to be in use. ZOWE_ZLUX_SSH_PORT: The Zowe desktop contains an application VT Terminal which opens a terminal to z/OS inside the Zowe desktop web page. This port is the number used by the z/OS SSH service and defaults to 22. The USS command netstat -b | grep SSHD1 can be used to display the SSH port used on a z/OS system. ZOWE_ZLUX_TELNET_PORT: The Zowe desktop contains an application 3270 Terminal which opens a 3270 emulator inside the Zowe desktop web page. This port is the number used by the z/OS telnet service and defaults to 23. The USS command netstat -b | grep TN3270 can be used to display the telnet port used on a z/OS system. ZOWE_ZLUX_SECURITY_TYPE: The 3270 Terminal application needs to know whether the telnet service is using tls or telnet for security. The default value is blank for telnet. # Gateway configuration APIML_ALLOW_ENCODED_SLASHES: When this parameter is set to true, the Gateway allows encoded characters to be part of URL requests redirected through the Gateway. APIML_CORS_ENABLED: When this parameter is set to true, CORS is enabled in the API Gateway for Gateway routes api/v1/gateway/**. APIML_PREFER_IP_ADDRESS: Set the value of the parameter to true if you want to advertise a service IP address instead of its hostname. APIML_GATEWAY_TIMEOUT_MILLIS: Timeout for connection to the services. APIML_SECURITY_ZOSMF_APPLID: The z/OSMF APPLID used for PassTicket. APIML_SECURITY_AUTH_PROVIDER: The authentication provider used by the API Gateway. By default, the API Gateway uses z/OSMF as an authentication provider, but it is possible to switch to SAF as the authentication provider instead of z/OSMF. APIML_DEBUG_MODE_ENABLED: When this parameter is set to true, detailed logging of activity by the API mediation layer occurs. This can be useful to diagnose unexpected behavior of the API gateway, API discovery, or API catalog services. Default value is false.
Refer to the detailed section about API Gateway configuration. # Cross memory server ZOWE_ZSS_XMEM_SERVER_NAME: For the Zowe Desktop to operate, communication with the Zowe cross memory server is required. The default procedure name ZWESIS_STD is used for the cross memory server. However, this can be changed in the ZWESISTC PROCLIB member. This might occur to match local naming standards, or to allow isolated testing of a new version of the cross memory server while an older version is running concurrently. The Zowe desktop that runs under the ZWESVSTC started task will locate the appropriate cross memory server running under its started task ZWESISTC using the ZOWE_ZSS_XMEM_SERVER_NAME value. If this handshake cannot occur, users will be unable to log in to the Zowe desktop. See Troubleshooting: ZSS server unable to communicate with X-MEM. //ZWESISTC PROC NAME='ZWESIS_STD',MEM=00,RGN=0M # Extensions ZWEAD_EXTERNAL_STATIC_DEF_DIRECTORIES: Full USS path to the directory that contains static API Mediation Layer .yml definition files. For more information, see Onboard a REST API without code changes required. Multiple paths should be semicolon separated. This value allows a Zowe instance to be configured so that the API Mediation Layer can be extended by third party REST API and web UI servers. EXTERNAL_COMPONENTS: For third-party extenders to add the full path to the directory that contains their component lifecycle scripts. For more information, see Zowe lifecycle - Zowe extensions. # High Availability The high availability (HA) feature of Zowe is under development and has not been fully delivered. The following values are work in progress towards HA capability. They are not used and will be documented in more detail once HA support is finalized in a future Zowe release. ZWE_DISCOVERY_SERVICES_LIST: (Work in progress) Do not modify this value from its supplied default of ${ZOWE_EXPLORER_HOST}:${DISCOVERY_PORT}/eureka/. ZWE_CACHING_SERVICE_PORT=7555: (Work in progress) This port is not yet used so the value does not need to be available. ZWE_CACHING_SERVICE_PERSISTENT=VSAM: (Work in progress) ZWE_CACHING_SERVICE_VSAM_DATASET: (Work in progress) # Configuring a Zowe instance via instance.env file When configuring a Zowe instance through the instance.env file, ZOWE_IP_ADDRESS and ZOWE_EXPLORER_HOST are used to specify where the Zowe servers can be reached. However, these values may not reflect the website name that you access Zowe from. This is especially true in the following cases: - You are using a proxy - The URL is a derivative of the value of ZOWE_EXPLORER_HOST, such as myhost versus myhost.mycompany.com In these cases, it may be necessary to specify a value for ZWE_EXTERNAL_HOSTS in the form of a comma-separated list of the addresses from which you want to access Zowe in your browser. In the previous example, ZWE_EXTERNAL_HOSTS could include both myhost and myhost.mycompany.com. In the instance.env, this would look like: ZWE_EXTERNAL_HOSTS=myhost,myhost.mycompany.com This configuration value may be used for multiple purposes, including referrer-based security checks. In the case that the values are not specified, referrer checks will use the default values of ZOWE_IP_ADDRESS, ZOWE_EXPLORER_HOST, and the system's hostname. Therefore, if these values are not what you put into your browser, you will want to specify ZWE_EXTERNAL_HOSTS to set the correct value. ZOWE_EXPLORER_FRAME_ANCESTORS: The MVS, USS, and JES Explorer are served by their respective explorer UI address spaces.
These are accessed through the Zowe desktop where they are hosted as iFrames. To protect against double iFrame security vulnerabilities, all of the valid addresses that may be used by the browser must be explicitly declared in this property. The default values are: "${ZOWE_EXPLORER_HOST}:*,${ZOWE_IP_ADDRESS}:*". If there are any other URLs by which the Zowe Explorers can be served, then these should be appended to the preceding comma-separated list. # Hints and tips Learn about some hints and tips that you might find useful when you create and configure the Zowe instance. When you are configuring Zowe on z/OS, you need to create certificates, and then create the Zowe instance. The creation of a Zowe instance is controlled by the instance.env file in your instance directory INSTANCE_DIR. Keystore Edit the instance.env file to set the keystore directory to the one you created when you ran zowe-setup-certificates.sh. The keyword and value in instance.env should be the same as in zowe-setup-certificates.env, as shown below: KEYSTORE_DIRECTORY=/my/zowe/instance/keystore Hostname and IP address The zowe-configure-instance.sh script handles the IP address and hostname the same way zowe-setup-certificates.sh does. In instance.env, you specify the IP address and hostname using the following keywords: ZOWE_EXPLORER_HOST= ZOWE_IP_ADDRESS= The ZOWE_EXPLORER_HOST value must resolve to the external IP address, otherwise you should use the external IP address as the value for ZOWE_EXPLORER_HOST. The zowe-configure-instance.sh script will attempt to discover the IP address and hostname of your system if you leave these unset. When the script cannot determine the hostname or the IP address, it will ask you to enter the IP address manually during the dialog. If you have not specified a value for ZOWE_EXPLORER_HOST, then the script will use the IP address as the hostname. The values of ZOWE_EXPLORER_HOST and ZOWE_IP_ADDRESS that the script discovered are appended to the instance.env file unless they were already set in that file or as shell environment variables before you ran the script.
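As a rough companion to the keywords above (this sketch is not part of Zowe; the file path and the list of required keys are assumptions made for the example), a simple Python script could parse an instance.env-style KEY=VALUE file and flag missing entries:

import os

INSTANCE_ENV = "/my/zowe/instance/instance.env"   # hypothetical path
REQUIRED_KEYS = ["KEYSTORE_DIRECTORY", "ZOWE_EXPLORER_HOST", "ZOWE_IP_ADDRESS", "GATEWAY_PORT"]

def parse_instance_env(path):
    # Parse simple KEY=VALUE lines, ignoring blanks and comments.
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

if os.path.exists(INSTANCE_ENV):
    config = parse_instance_env(INSTANCE_ENV)
    missing = [k for k in REQUIRED_KEYS if not config.get(k)]
    if missing:
        print("Missing or empty keys:", ", ".join(missing))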
https://docs.zowe.org/stable/user-guide/configure-instance-directory.html
2021-01-15T17:55:52
CC-MAIN-2021-04
1610703495936.3
[]
docs.zowe.org
starspot starspot is a tool for measuring stellar rotation periods using Lomb-Scargle (LS) periodograms, autocorrelation functions (ACFs), phase dispersion minimization (PDM) and Gaussian processes (GPs). It uses the astropy implementation of Lomb-Scargle periodograms, and the exoplanet implementation of fast celerite Gaussian processes. starspot is compatible with any light curve with time, flux and flux uncertainty measurements, including Kepler, K2 and TESS light curves. If your light curve has evenly-spaced (or close to evenly-spaced) observations, all three of these methods: LS periodograms, ACFs and GPs will be applicable. For unevenly spaced light curves like those from Gaia or ground-based observatories, LS periodograms and GPs are preferable to ACFs.

Example usage

import numpy as np
import starspot as ss

# Generate some data
time = np.linspace(0, 100, 10000)
period = 10
w = 2*np.pi/period
flux = np.sin(w*time) + np.random.randn(len(time))*1e-2 + \
    np.random.randn(len(time))*.01
flux_err = np.ones_like(flux)*.01

rotate = ss.RotationModel(time, flux, flux_err)

# Calculate the Lomb-Scargle periodogram period (highest peak in the periodogram).
lomb_scargle_period = rotate.ls_rotation()

# Calculate the autocorrelation function (ACF) period (highest peak in the ACF).
# This is for evenly sampled data only -- time between observations is 'interval'.
acf_period = rotate.acf_rotation(interval=np.diff(time)[0])

# Calculate the phase dispersion minimization period (period of lowest dispersion).
period_grid = np.linspace(5, 20, 1000)
pdm_period = rotate.pdm_rotation(period_grid)

print(lomb_scargle_period, acf_period, pdm_period)
>> 9.99892010582963 10.011001100110011 10.0

# Calculate a Gaussian process rotation period
gp_period = rotate.GP_rotation()

User Guide

License & attribution

The source code is made available under the terms of the MIT license. If you make use of this code, please cite this package and its dependencies. You can find more information about how and what to cite in the citation documentation.
https://starspot.readthedocs.io/en/latest/
2021-01-15T18:06:44
CC-MAIN-2021-04
1610703495936.3
[]
starspot.readthedocs.io
If the body is at least this close to another body, this body will consider them to be colliding. Lock the body's X axis movement. Lock the body's Y axis movement. Lock the body's Z axis movement. Method Descriptions Returns true if the specified axis is locked. See also move_lock_x, move_lock_y and move_lock_z. Returns true if the body is on the ceiling. Only updates when calling move_and_slide or move_and_slide_with_snap. Returns true if the body is on the floor. Only updates when calling move_and_slide or move_and_slide_with_snap. Returns true if the body is on a wall. Only updates when calling move_and_slide or move_and_slide_with_snap.
https://docs.godotengine.org/zh_CN/stable/classes/class_kinematicbody.html
2021-01-15T17:58:22
CC-MAIN-2021-04
1610703495936.3
[]
docs.godotengine.org
Visual Structure This section defines terms and concepts used in the scope of RadBook, which you have to become familiar with before continuing to read this help. Below you can see a snapshot and explanation of the main visual elements of the RadBook control. The structure of a RadBook consists of the following main elements: Left Page: This is the page that is rendered on the left side of the book. Right Page: This is the page that is rendered on the right side of the book. Page Fold: This is the part of the page that is dragged when the page is flipping.
https://docs.telerik.com/devtools/wpf/controls/radbook/visual-structure
2021-01-15T19:03:46
CC-MAIN-2021-04
1610703495936.3
[array(['images/book_visuals_wpf.png', 'RadBook Visual structure'], dtype=object) ]
docs.telerik.com
# [Ride v5] Strict Variable ⚠️ This is the documentation for the Standard Library version 5, which is currently available for Stagenet only. Go to Mainnet version The strict keyword defines a variable with eager evaluation. Unlike lazy variables defined with let, a strict variable is evaluated immediately when script execution reaches it, that is, before the next expression. A strict variable can be defined only inside another definition, for example, inside the body of a function. A strict variable will not be evaluated if it is defined inside another definition that is not used: for example, inside a function that has not been called. Like lazy variables, strict variables are immutable. Strict variables are suitable for dApp-to-dApp invocation as they ensure executing callable functions and applying their actions in the right order. Example: func foo() = { ... strict balanceBefore = wavesBalance(this).regular strict z = Invoke(dapp2,bar,args,[AttachedPayment(unit,100000000)]) strict balanceAfter = wavesBalance(this).regular if(balanceAfter < balanceBefore) then ... else... } In this example, balanceBefore and balanceAfter may differ because payments to dApp2 and actions performed by the bar callable function can affect the balance.
https://docs.waves.tech/en/ride/v5/variables
2021-01-15T19:33:58
CC-MAIN-2021-04
1610703495936.3
[]
docs.waves.tech
condor_vacate Vacate jobs that are running on the specified hosts Synopsis condor_vacate [-help | -version ] condor_vacate [-graceful | -fast ] [-debug ] [-pool centralmanagerhostname[:portnumber]] [ -name hostname | hostname | -addr "<a.b.c.d:port>" | "<a.b.c.d:port>" | -constraint expression | -all ] Description condor_vacate causes HTCondor to checkpoint any running jobs on a set of machines and force the jobs to vacate the machine. The job(s) remains in the submitting machine's job queue. Given the (default) -graceful option, jobs are killed and HTCondor restarts the job from the beginning somewhere else. Exit Status condor_vacate will exit with a status value of 0 (zero) upon success, and it will exit with the value 1 (one) upon failure. Examples To send a condor_vacate command to two named machines: $ condor_vacate robin cardinal To send a condor_vacate command to a machine in a pool other than the local pool, use the -pool option: $ condor_vacate -pool condor.cae.wisc.edu -name cae17
https://htcondor.readthedocs.io/en/v8_9_9/man-pages/condor_vacate.html
2021-01-15T17:41:07
CC-MAIN-2021-04
1610703495936.3
[]
htcondor.readthedocs.io
Changelog for package hector_quadrotor_gazebo_plugins 0.3.5 (2015-03-28) 0.3.4 (2015-02-22) added dynamic_reconfigure server to gazebo_ros_baro plugin See for the equivalent commit in hector_gazebo_plugins. publish propulsion and aerodynamic wrench as WrenchStamped This is primarily for debugging purposes. The default topic for the propulsion plugin has been changed to propulsion/wrench. disabled detection of available plugins in cmake The aerodynamics and propulsion plugins are built unconditionally now in hector_quadrotor_gazebo_plugins and the detection is obsolete. Additionally we used platform-specific library prefixes and suffixes in find_library() which caused errors on different platforms. Contributors: Johannes Meyer 0.3.3 (2014-09-01) fixed some compiler warnings and missing return values added separate update timer for MotorStatus output in propulsion plugin Contributors: Johannes Meyer 0.3.2 (2014-03-30) 0.3.1 (2013-12-26) disabled separate queue thread for the aerodynamics plugin fixed configuration namespace and plugin cleanup aerodynamics plugin should apply forces and torques in world frame accept hector_uav_msgs/MotorCommand messages directly for the propulsion model/plugin deleted deprecated export section from package.xml abort with a fatal error if ROS is not yet initialized + minor code cleanup fixed commanded linear z velocity upper bound for auto shutdown in simple controller plugin improved auto shutdown to prevent shutdowns while airborne added motor engage/shutdown, either automatically (default) or using ROS services /engage and /shutdown (std_srvs/Empty) using ROS parameters to configure state topics use controller_manager in gazebo_ros_control instead of running standalone pose_controller Contributors: Johannes Meyer 0.3.0 (2013-09-11) Catkinized stack hector_quadrotor and integrated hector_quadrotor_demo package from former hector_quadrotor_apps stack added wrench publisher to the quadrotor_simple_controller plugin created new package hector_quadrotor_model and moved all gazebo plugins to hector_quadrotor_gazebo_plugins
http://docs.ros.org/en/indigo/changelogs/hector_quadrotor_gazebo_plugins/changelog.html
2021-01-15T18:31:23
CC-MAIN-2021-04
1610703495936.3
[]
docs.ros.org
The console allows advanced users to increase their productivity and to perform complex operations that cannot be performed using any of the other GUI (graphical user interface) elements. Different algorithms can be defined using the command-line interface, and additional operations such as loops and conditional statements can be added for a more flexible and productive workflow. Code executed in the Python console, even if it does not call any geoprocessing framework algorithm, can be converted into a new algorithm that can later be called from the toolbox or used in the graphical modeler, like any other algorithm. Moreover, some of the algorithms that you see in the toolbox are actually ordinary scripts. In this section, we will see how to use processing algorithms from the QGIS Python console, and also how to write algorithms using Python. The first thing you have to do when using the processing framework from the command line is to import the processing module: >>> import processing Field name. The case-sensitive name of a field of the attribute table. Fixed Table. Type the list of all table values separated by commas (,) and enclosed between quotes ("). Values start on the upper row and go from left to right. You can also use a 2-D array of values representing the table. CRS. Enter the EPSG code of the desired coordinate reference system. Extent. The xmin, xmax, ymin and ymax values, separated by commas (,). Boolean, string and numeric values, as well as file paths, need no additional explanation. The syntax is identical to the one described above; in addition, a global variable named alg is available, representing the algorithm that has just been (or is about to be) executed. In the General group of the processing configuration dialog, you will find two entries named Pre-execution script file and Post-execution script file where the filename of the scripts to be run in each case can be entered.
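As an illustrative sketch only (the algorithm name, input path and CRS are placeholders, and the QGIS 2.x Processing console API with alglist/alghelp/runalg is assumed), an algorithm could be invoked from the Python console roughly like this:

import processing

# Inspect what is available and how a given algorithm is called
processing.alglist()
processing.alghelp("qgis:reprojectlayer")

# Run an algorithm; "input.shp" and the target CRS are placeholder values,
# and passing None for the output lets Processing create a temporary file.
result = processing.runalg("qgis:reprojectlayer", "input.shp", "EPSG:4326", None)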
https://docs.qgis.org/2.18/ru/docs/user_manual/processing/console.html
2021-01-15T18:56:43
CC-MAIN-2021-04
1610703495936.3
[]
docs.qgis.org
This part of the documentation contains guides that will help you configure and manage notifications in Telestream Cloud using our web console. This is where you can find detailed guides on: - how to enable notifications using delivery options available in Telestream Cloud - how to remove or edit notifications Let's get started.
https://docs.telestream.dev/docs/user-guides-notifications
2021-01-15T17:38:12
CC-MAIN-2021-04
1610703495936.3
[]
docs.telestream.dev
Policy Configuration for Execute Hosts and for Submit Hosts Note Configuration templates make it easier to implement certain policies; see information on policy templates here: Available Configuration Templates. condor_startd Policy Configuration This section describes the configuration of machines, such that they, through the condor_startd daemon, implement a desired policy for when remote jobs should start, be suspended, (possibly) resumed, vacate (with a checkpoint) or be killed. This policy is the heart of HTCondor's balancing act between the needs and wishes of resource owners (machine owners) and resource users (people submitting their jobs to HTCondor). Please read this section carefully before changing any of the settings described here, as a wrong setting can have a severe impact on either the owners of machines in the pool or the users of the pool. condor_startd Terminology Understanding the configuration requires an understanding of ClassAd expressions, which are detailed in the HTCondor's ClassAd Mechanism section. START Expression The START expression is also detailed in the HTCondor's ClassAd Mechanism section. The default IS_OWNER expression is START =?= FALSE; see a detailed discussion of the IS_OWNER expression in condor_startd Policy Configuration. The RANK Expression This RANK does not work if a job is submitted with an image size of more than 1012 Kbytes. However, with that size, this RANK expression preferring that job would not be HTCondor's only problem! Machine States A machine is assigned a state by HTCondor. The state depends on whether or not the machine is available to run HTCondor jobs, and if so, what point in the negotiations has been reached. The possible states are: - Owner - The machine is being used by the machine owner, and/or is not available to run HTCondor jobs. When the machine first starts up, it begins in this state. - Unclaimed - The machine is available to run HTCondor jobs, but it is not currently doing so. - Matched - The machine is available to run jobs, and it has been matched by the negotiator with a specific schedd. That schedd just has not yet claimed this machine. In this state, the machine is unavailable for further matches. - Claimed - The machine has been claimed by a schedd. - Preempting - The machine was claimed by a schedd, but is now preempting that claim for one of the following reasons. - the owner of the machine came back - another user with higher priority has jobs waiting to run - another request that this resource would rather serve was found - Backfill - The machine is running a backfill computation while waiting for either the machine owner to come back or to be matched with an HTCondor job. This state is only entered if the machine is specifically configured to enable backfill jobs. - Drained - The machine is not running jobs, because it is being drained. One reason a machine may be drained is to consolidate resources that have been divided in a partitionable slot. Consolidating the resources gives large jobs a chance to run. Each transition is labeled with a letter. The cause of each transition is described below. Transitions out of the Owner state - A The machine switches from Owner to Unclaimed whenever the START expression no longer locally evaluates to FALSE. This indicates that the machine is potentially available to run an HTCondor job. - N The machine switches from the Owner to the Drained state whenever draining of the machine is initiated, for example by condor_drain or by the condor_defrag daemon.
Transitions out of the Unclaimed state - B The machine switches from Unclaimed back to Owner whenever the START expression locally evaluates to FALSE. This indicates that the machine is unavailable to run an HTCondor job and is in use by the resource owner. - C The transition from Unclaimed to Matched happens whenever the condor_negotiator matches this resource with an HTCondor job. - D The transition from Unclaimed directly to Claimed also happens if the condor_negotiator matches this resource with an HTCondor job. In this case the condor_schedd receives the match and initiates the claiming protocol with the machine before the condor_startd receives the match notification from the condor_negotiator. - E The transition from Unclaimed to Backfill happens if the machine is configured to run backfill computations (see the Setting Up for Special Environments section) and the START_BACKFILL expression evaluates to TRUE. - P The transition from Unclaimed to Drained happens if draining of the machine is initiated, for example by condor_drain or by the condor_defrag daemon. Transitions out of the Matched state - F The machine moves from Matched to Owner if either the START expression locally evaluates to FALSE, or if the MATCH_TIMEOUT timer expires. This timeout is used to ensure that if a machine is matched with a given condor_schedd, but that condor_schedd does not contact the condor_startd to claim it, that the machine will give up on the match and become available to be matched again. In this case, since the START expression does not locally evaluate to FALSE, as soon as transition F is complete, the machine will immediately enter the Unclaimed state again (via transition A). The machine might also go from Matched to Owner if the condor_schedd attempts to perform the claiming protocol but encounters some sort of error. Finally, the machine will move into the Owner state if the condor_startd receives a condor_vacate command while it is in the Matched state. - G The transition from Matched to Claimed occurs when the condor_schedd successfully completes the claiming protocol with the condor_startd. Transitions out of the Claimed state - H From the Claimed state, the only possible destination is the Preempting state. This transition can be caused by many reasons: The condor_schedd that has claimed the machine has no more work to perform and releases the claim The PREEMPT expression evaluates to True (which usually means the resource owner has started using the machine again and is now using the keyboard, mouse, CPU, etc.) The condor_startd receives a condor_vacate command The condor_startd is told to shutdown (either via a signal or a condor_off command) The resource is matched to a job with a better priority (either a better user priority, or one where the machine rank is higher) Transitions out of the Preempting state - I The resource will move from Preempting back to Claimed if the resource was matched to a job with a better priority. - J The resource will move from Preempting to Owner if the PREEMPT expression had evaluated to TRUE, if condor_vacate was used, or if the START expression locally evaluates to FALSE when the condor_startd has finished evicting whatever job it was running when it entered the Preempting state.
Transitions out of the Backfill state - K The resource will move from Backfill to Owner for the following reasons: The EVICT_BACKFILLexpression evaluates to TRUE The condor_startd receives a condor_vacate command The condor_startd is being shutdown - L The transition from Backfill to Matched occurs whenever a resource running a backfill computation is matched with a condor_schedd that wants to run an HTCondor job. - M The transition from Backfill directly to Claimed is similar to the transition from Unclaimed directly to Claimed. It only occurs if the condor_schedd completes the claiming protocol before the condor_startd receives the match notification from the condor_negotiator. Transitions out of the Drained state - O The transition from Drained to Owner state happens when draining is finalized or is canceled. When a draining request is made, the request either asks for the machine to stay in a Drained state until canceled, or it asks for draining to be automatically finalized once all slots have finished draining. The Claimed State and Leases¶. Machine Activities¶ Within some machine states, activities of the machine are defined. The state has meaning regardless of activity. Differences between activities are significant. Therefore, a “state/activity” pair describes a machine. The following list describes all the possible state/activity pairs. Owner - Idle This is the only activity for Owner state. As far as HTCondor is concerned the machine is Idle, since it is not doing anything for HTCondor. Unclaimed - Idle This is the normal activity of Unclaimed machines. The machine is still Idle in that the machine owner is willing to let HTCondor jobs run, but HTCondor is not using the machine for anything. - Benchmarking The machine is running benchmarks to determine the speed on this machine. This activity only occurs in the Unclaimed state. How often the activity occurs is determined by the RUNBENCHMARKSexpression. Matched - Idle When Matched, the machine is still Idle to HTCondor. Claimed - Idle In this activity, the machine has been claimed, but the schedd that claimed it has yet to activate the claim by requesting a condor_starter to be spawned to service a job. The machine returns to this state (usually briefly) when jobs (and therefore condor_starter) finish. - Busy Once a condor_starter has been started and the claim is active, the machine moves to the Busy activity to signify that it is doing something as far as HTCondor is concerned. - Suspended If the job is suspended by HTCondor, the machine goes into the Suspended activity. The match between the schedd and machine has not been broken (the claim is still valid), but the job is not making any progress and HTCondor is no longer generating a load on the machine. - Retiring When an active claim is about to be preempted for any reason, it enters retirement, while it waits for the current job to finish. The MaxJobRetirementTimeexpression determines how long to wait (counting since the time the job started). Once the job finishes or the retirement time expires, the Preempting state is entered. Preempting The Preempting state is used for evicting an HTCondor job from a given machine. When the machine enters the Preempting state, it checks the WANT_VACATEexpression to determine its activity. - Vacating In the Vacating activity, the job that was running is in the process of checkpointing. 
As soon as the checkpoint process completes, the machine moves into either the Owner state or the Claimed state, depending on the reason for its preemption. - Killing Killing means that the machine has requested the running job to exit the machine immediately, without checkpointing. Backfill - Idle The machine is configured to run backfill jobs and is ready to do so, but it has not yet had a chance to spawn a backfill manager (for example, the BOINC client). - Busy The machine is performing a backfill computation. - Killing The machine was running a backfill computation, but it is now killing the job to either return resources to the machine owner, or to make room for a regular HTCondor job. Drained - Idle All slots have been drained. - Retiring This slot has been drained. It is waiting for other slots to finish draining. The following diagram. State and Activity Transitions¶. Owner State¶ =?= FALSE So, the machine will remain in the Owner state as long as the START expression locally evaluates to FALSE. The condor_startd Policy Configuration section the POLICY : Desktop configuration template is in use. If the START expression is START = KeyboardIdle > 15 * $(MINUTE) && Owner == "coltrane" and if KeyboardIdle is 34 seconds, then the machine would remain in the Owner state. Owner is undefined, and anything && FALSE is FALSE. If, however, the START expression is START = KeyboardIdle > 15 * $(MINUTE) || Owner == "coltrane" and KeyboardIdle is 34 seconds, then the machine leaves the Owner state and becomes Unclaimed. This is because FALSE || UNDEFINED TRUE. With the POLICY : Desktop configuration template,). Unclaimed State¶ =?= FALSE so the Setting Up for Special Environments section),). Matched State¶). Claimed State¶, depending on the universe of the job running on the claim: vanilla, and all others. The normal expressions look like the. If suspending the job for a short while does not satisfy the machine owner (the owner is still using the machine after a specific period of time), the startd moves on to vacating the job.: - Claimed/Idle If the starter that is serving a given job exits (for example because the jobs completes), the machine will go to Claimed/Idle (transition 12). Claimed/Retiring If WANT_SUSPENDis FALSE and the PREEMPTexpression is True, the machine enters the Retiring activity (transition 13). From there, it waits for a configurable amount of time for the job to finish before moving on to preemption. Another reason the machine would go from Claimed/Busy to Claimed/Retiring is if the condor_negotiator matched the machine with a “better” match. This better match could either be from the machine’s perspective using the startd RANKexpression, or it could be from the negotiator’s perspective due to a job with a higher user priority. Another case resulting in a transition to Claimed/Retiring is when the startd is being shut down. The only exception is a “fast” shutdown, which bypasses retirement completely. - Claimed/Suspended If both the WANT_SUSPENDand SUSPENDexpressions evaluate to TRUE, the machine suspends the job (transition 14).: - Claimed/Busy If the CONTINUEexpression evaluates to TRUE, the machine resumes the job and enters the Claimed/Busy state (transition 15) or the Claimed/Retiring state (transition 16), depending on whether the claim has been preempted. - Claimed/Retiring If the PREEMPTexpression is TRUE, the machine will enter the Claimed/Retiring activity (transition 16). 
- Preempting If the claim is in suspended retirement and the retirement time expires, the job enters the Preempting state (transition 17). This is only possible if MaxJobRetirementTimedecreases during the suspension. For the Claimed/Retiring state, the following transitions may occur: - Preempting If the job finishes or the job’s run time exceeds the value defined for the job ClassAd attribute MaxJobRetirementTime, the Preempting state is entered (transition 18). The run time is computed from the time when the job was started by the startd minus any suspension time. When retiring due to condor_startd daemon shutdown or restart, it is possible for the administrator to issue a peaceful shutdown command, which causes MaxJobRetirementTimeto effectively be infinite, avoiding any killing of jobs. It is also possible for the administrator to issue a fast shutdown command, which causes MaxJobRetirementTimeto be effectively 0. - Claimed/Busy If the startd was retiring because of a preempting claim only and the preempting claim goes away, the normal Claimed/Busy state is resumed (transition 19). If instead the retirement is due to owner activity ( PREEMPT) or the startd is being shut down, no unretirement is possible. - Claimed/Suspended In exactly the same way that suspension may happen from the Claimed/Busy state, it may also happen during the Claimed/Retiring state (transition 20). In this case, when the job continues from suspension, it moves back into Claimed/Retiring (transition 16) instead of Claimed/Busy (transition 15). Preempting State¶ (condor_startd Configuration File Macros). ‘s). Backfill State¶ The Backfill state is used whenever the machine is performing low priority background tasks to keep itself busy. For more information about backfill support in HTCondor, see the Configuring HTCondor for Running Backfill Jobs section.). Drained State¶). State/Activity Transition Expression Summary¶ This section is a summary of the information from the previous sections. It serves as a quick reference. START When TRUE, the machine is willing to spawn a remote HTCondor job. RUNBENCHMARKS While in the Unclaimed state, the machine will run benchmarks whenever TRUE. MATCH_TIMEOUT If the machine has been in the Matched state longer than this value, it will transition to the Owner state. WANT_SUSPEND If True, the machine evaluates the SUSPENDexpression to see if it should transition to the Suspended activity. If any value other than True, the machine will look at the PREEMPTexpression. SUSPEND If WANT_SUSPENDis True, and the machine is in the Claimed/Busy state, it enters the Suspended activity if SUSPENDis True. CONTINUE If the machine is in the Claimed/Suspended state, it enter the Busy activity if CONTINUEis True. PREEMPT If the machine is either in the Claimed/Suspended activity, or is in the Claimed/Busy activity and WANT_SUSPENDis FALSE, the machine enters the Claimed/Retiring state whenever PREEMPTis TRUE. CLAIM_WORKLIFE This expression specifies the number of seconds after which a claim will stop accepting additional jobs. This configuration macro is fully documented here: condor_startd Configuration File Macros. MachineMaxVacateTime When the machine enters the Preempting/Vacating state, this expression specifies the maximum time in seconds that the condor_startd will wait for the job to finish. The job may adjust the wait time by setting JobMaxVacateTime. If the job’s setting is less than the machine’s, the job’s. Once the vacating time expires, the job is hard-killed. 
The KILLexpression may be used to abort the graceful shutdown of the job at any time. MAXJOBRETIREMENTTIME If the machine is in the Claimed/Retiring state, jobs which job may provide its own expression for MaxJobRetirementTime, but this can only be used to take less than the time granted by the condor_startd, never more. For convenience, nice_user jobs are submitted with a default retirement time of 0, so they will never wait in retirement unless the user overrides the default.. WANT_VACATE This is checked only when the PREEMPTexpression is Trueand the machine enters the Preempting state. If WANT_VACATEis True, the machine enters the Vacating activity. If it is False, the machine will proceed directly to the Killing activity. KILL If the machine is in the Preempting/Vacating state, it enters Preempting/Killing whenever KILLis True. KILLING_TIMEOUT If the machine is in the Preempting/Killing state for longer than KILLING_TIMEOUTseconds, the condor_startd sends a SIGKILL to the condor_starter and all its children to try to kill the job as quickly as possible. PERIODIC_CHECKPOINT If the machine is in the Claimed/Busy state and PERIODIC_CHECKPOINTis TRUE, the user’s job begins a periodic checkpoint. RANK If this expression evaluates to a higher number for a pending resource request than it does for the current request, the machine may preempt the current request (enters the Preempting/Vacating state). When the preemption is complete, the machine enters the Claimed/Idle state with the new resource request claiming it. START_BACKFILL When TRUE, if the machine is otherwise idle, it will enter the Backfill state and spawn a backfill computation (using BOINC). EVICT_BACKFILL When TRUE, if the machine is currently running a backfill computation, it will kill the BOINC client and return to the Owner/Idle state. Examples of Policy Configuration¶ This section describes various policy configurations, including the default policy.. StateTimer Amount of time in seconds in the current state. ActivityTimer Amount of time in seconds in the current activity. ActivationTimer Amount of time in seconds that the job has been running on this machine. LastCkpt Amount of time since the last periodic checkpoint. NonCondorLoadAvg The difference between the system load and the HTCondor load (the load generated by everything but HTCondor). BackgroundLoad Amount of background load permitted on the machine and still start an HTCondor job. HighLoad If the $(NonCondorLoadAvg)goes over this, the CPU is considered too busy, and eviction of the HTCondor job should start. StartIdleTime Amount of time the keyboard must to be idle before HTCondor will start a job. ContinueIdleTime Amount of time the keyboard must to be idle before resumption of a suspended job. MaxSuspendTime Amount of time a job may be suspended before more drastic measures are taken. KeyboardBusy A boolean expression that evaluates to TRUE when the keyboard is being used. CPUIdle A boolean expression that evaluates to TRUE when the CPU is idle. CPUBusy A boolean expression that evaluates to TRUE when the CPU is busy. MachineBusy The CPU or the Keyboard is busy. CPUIsBusy A boolean value set to the same value as CPUBusy. CPUBusyTime The value 0 if CPUBusyis False; the time in seconds since CPUBusybecame True.)) Test-job Policy. Time of Day Policy. Desktop/Non-Desktop Policy. Disabling and Enabling Preemption. Job Suspension As new jobs are submitted that receive a higher priority than currently executing jobs, the executing jobs may be preempted. 
If the preempted jobs are not capable of writing checkpoints, they lose whatever forward progress they have made, and are sent back to the job queue to await starting over again as another machine becomes available. An alternative to this is to use suspension to freeze the job while some other task runs, and then unfreeze it so that it can continue on from where it left off. This does not require any special handling in the job, unlike most strategies that take checkpoints. However, it does require a special configuration of HTCondor. This example implements a policy that allows the job to decide whether it should be evicted or suspended. The jobs announce their choice through the use of the invented job ClassAd attribute IsSuspendableJob, that is also utilized in the configuration. The implementation of this policy utilizes two categories of slots, identified as suspendable or nonsuspendable. A job identifies which category of slot it wishes to run on. This affects two aspects of the policy: Of two jobs that might run on a slot, which job is chosen. The four cases that may occur depend on whether the currently running job identifies itself as suspendable or nonsuspendable, and whether the potentially running job identifies itself as suspendable or nonsuspendable. If the currently running job is one that identifies itself as suspendable, and the potentially running job identifies itself as nonsuspendable, the currently running job is suspended, in favor of running the nonsuspendable one. This occurs independent of the user priority of the two jobs. If both the currently running job and the potentially running job identify themselves as suspendable, then the relative priorities of the users and the preemption policy determines whether the new job will replace the existing job. If both the currently running job and the potentially running job identify themselves as nonsuspendable, then the relative priorities of the users and the preemption policy determines whether the new job will replace the existing job. If the currently running job is one that identifies itself as nonsuspendable, and the potentially running job identifies itself as suspendable, the currently running job continues running. What happens to a currently running job that is preempted. A job that identifies itself as suspendable will be suspended, which means it is frozen in place, and will later be unfrozen when the preempting job is finished. A job that identifies itself as nonsuspendable is evicted, which means it writes a checkpoint, when possible, and then is killed. The job will return to the idle state in the job queue, and it can try to run again in the future. # Configuration for Interactive Jobs Policy may be set based on whether a job is an interactive one or not. Each interactive job has the job ClassAd attribute InteractiveJob = True) ) Multi-Core Machine Terminology¶ condor_startd Policy Configuration. Dividing System Resources in Multi-core Machines¶ Within a machine the shared system resources of cores, RAM, swap space and disk space will be divided for use by the slots. There are two main ways to go about dividing the resources of a multi-core machine: - Evenly divide all resources. By default, the condor_startd will automatically divide the machine into slots, placing one core in each slot, and evenly dividing all shared resources among the slots. The only specification may be how many slots are reported at a time. By default, all slots are reported to HTCondor. 
How many slots are reported at a time is accomplished by setting the configuration variable NUM_SLOTSto the integer number of slots desired. If variable NUM_SLOTSis not defined, it defaults to the number of cores within the machine. Variable NUM_SLOTSmay not be used to make HTCondor advertise more slots than there are cores on the machine. The number of cores is defined by NUM_CPUS. - Define slot types. Instead of an even division of resources per slot, the machine may have definitions of slot types, where each type is provided with a fraction of shared system resources. Given the slot type definition, control how many of each type are reported at any given time with further configuration.: A simple fraction, such as 1/4 A simple percentage, such as 25% A comma-separated list of attributes, with a percentage, fraction, numerical value, or autofor each one. A comma-separated list that includes a blanket value that serves as a default for any resources not explicitly specified in the list. A simple fraction or percentage describes the allocation of the total system resources, including the number of CPUS or cores. A comma separated list allows a fine tuning of the amounts for specific resources.or SLOT<N>_EXECUTEdirectory.: Cpus, C, c, cpu ram, RAM, MEMORY, memory, Mem, R, r, M, m disk, Disk, D, d swap, SWAP, S, s, VirtualMemory, V, vfor_NAMESto condor_startd Policy Configuration.: Total<name>: the total quantity of the resource identified by <name> Detected<name>: the quantity detected of the resource identified by <name>; this attribute is currently equivalent to Total<name> TotalSlot<name>: the quantity of the resource identified by <name>allocated to this slot <name>: the amount of the resource identified by <name>available to be used on this slot From the example given, the gpuresource for that change to take effect. Configuration Specific to Multi-core Machines¶. SLOTS_CONNECTED_TO_CONSOLE, with definition at the condor_startd Configuration File Macros section SLOTS_CONNECTED_TO_KEYBOARD, with definition at the condor_startd Configuration File Macros section DISCONNECTED_KEYBOARD_IDLE_BOOST, with definition at the condor_startd Configuration File Macros section. The configuration file specifies policy expressions that are shared by all of the slots on the machine. Each slot reads the configuration file and sets up its own machine ClassAd. Each slot is now separate from the others. It has a different ClassAd attribute State, a different machine ClassAd, and if there is a job running, a separate job ClassAd. Each slot periodically evaluates the policy expressions, changing its own state as necessary. This occurs independently of the other slots on the machine. So, if the condor_startd daemon is evaluating a policy expression on a specific slot, and the policy expression refers to ProcID, Owner, or any attribute from a job ClassAd, it always refers to the ClassAd of the job running on the specific slot.. Load Average for Multi-core Machines¶. Debug Logging in the Multi-Core condor_startd Daemon¶. Configuring GPUs¶ HTCondor supports incorporating GPU resources and making them available for jobs. First, GPUs must be detected as available resources. Then, machine ClassAd attributes advertise this availability. Both detection and advertisement are accomplished by having this configuration for each execute machine that has GPUs: = -extra causes the condor_gpu_discovery tool to output more attributes that describe the detected GPUs on the machine. 
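For illustration, a minimal GPU configuration along these lines might look like the following sketch (the metaknob and macro names here are assumptions based on common HTCondor releases, not quoted from this page, so check them against your version):
use feature : GPUs
# Optional: pass -extra so that condor_gpu_discovery reports additional attributes for each detected GPU (assumed macro name)
GPU_DISCOVERY_EXTRA = -extra
With such a configuration in place, each execute machine runs GPU discovery at startup and advertises the detected GPUs in its slot ClassAds, so that jobs can request them with request_GPUs in their submit description files.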
Configuring STARTD_ATTRS on a per-slot basis¶ The STARTD_ATTRS (and legacy STARTD_EXPRS) settings can be configured on a per-slot basis. The condor_startd daemon builds the list of items to advertise by combining the lists in this order: STARTD_ATTRS STARTD_EXPRS SLOT<N>_STARTD_ATTRS SLOT<N>_STARTD_EXPRS slot1: favorite_color = "blue" favorite_season = "spring" slot2: favorite_color = "green" favorite_season = "spring" slot3: favorite_color = "blue" favorite_season = "summer" Dynamic Provisioning: Partitionable and Dynamic Slots¶: cpu = 10 memory = 10240 disk = BIG Assume that JobA is allocated to this slot. JobA includes the following requirements: cpu = 3 memory = 1024 disk = 10240 The portion of the slot that is carved out is now known as a dynamic slot. This dynamic slot has its own machine ClassAd, and its Name attribute distinguishes itself as a dynamic slot with incorporating the substring Slot1_1. After allocation, the partitionable Slot1 advertises that it has the following resources still available: cpu = 7 memory = 9216 disk = BIG-10240 As each new job is allocated to Slot1, it breaks into Slot1_1, Slot1_2, Slot1_3 etc., until the entire set of Slot1’s available resources have been consumed by jobs.: request_cpus request_memory request_disk (in kilobytes) condor_startd Policy Configuration.. Defaults for Partitionable Slot Sizes¶ JOB_DEFAULT_REQUESTMEMORY JOB_DEFAULT_REQUESTDISK JOB_DEFAULT_REQUESTCPUS The value of these variables can be ClassAd expressions. The default values for these variables, should they not be set are JOB_DEFAULT_REQUESTMEMORY= ifThenElse(MemoryUsage =!= UNDEFINED, MemoryUsage, 1) JOB_DEFAULT_REQUESTCPUS= 1 JOB_DEFAULT_REQUESTDISK= DiskUsage MODIFY_REQUEST_EXPR_REQUESTCPUS= quantize(RequestCpus, {1}) MODIFY_REQUEST_EXPR_REQUESTMEMORY= quantize(RequestMemory, {128}) MODIFY_REQUEST_EXPR_REQUESTDISK= quantize(RequestDisk, {1024}) condor_negotiator-Side Resource Consumption Policies¶ For partitionable slots, the specification of a consumption policy permits matchmaking at the negotiator. A dynamic slot carved from the partitionable slot acquires the required quantities of resources, leaving the partitionable slot with the remainder. This differs from scheduler matchmaking in that multiple jobs can match with the partitionable slot during a single negotiation cycle. All specification of the resources available is done by configuration of the partitionable slot. The machine is identified as having a resource consumption policy enabled with CONSUMPTION_POLICY = True A Defragmenting Dynamic Slots¶ When partitionable slots are used, some attention must be given to the problem of the starvation of large jobs due to the fragmentation of resources. The problem is that over time the machine resources may become partitioned into slots suitable only the condor_defrag Configuration File Macros section.). By default, reduce these costs, you may set the configuration macro DEFRAG_DRAINING_START_EXPR . If draining gracefully, the defrag daemon will set the START expression for the machine to this value expression. Do not set this to your usual START expression; jobs accepted while draining will not be given their MaxRetirementTime. Instead, when the last retiring job finishes (either terminates or runs out of retirement time), all other jobs on machine will be evicted with a retirement time of 0. (Those jobs will be given their MaxVacateTime, as usual.) The machine’s START expression will become FALSE and stay that way until - as usual - the machine exits the draining state. 
We recommend that you allow only interruptible jobs to start on draining machines. Different pools may have different ways of denoting interruptible, but a MaxJobRetirementTime of 0 is probably a good sign. You may also want to restrict the interruptible jobs’ MaxVacateTime to ensure that the machine will complete draining quickly. the Defrag ClassAd Attributes section. The following command may be used to view the condor_defrag daemon ClassAd: condor_status -l -any -constraint 'MyType == "Defrag"' condor_schedd Policy Configuration¶ There are two types of schedd policy: job transforms (which change the ClassAd of a job at submission) and submit requirements (which prevent some jobs from entering the queue). These policies are explained below. Job Transforms¶ the The HTCondor Job Router section. Submit Requirements¶ The condor_schedd may reject job submissions, such that rejected jobs never enter the queue. Rejection may be best for the case in which there are jobs that will never be able to run; for instance, a job specifying an obsolete universe, like standard.. Submit Warnings¶.
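As an illustration of the submit requirements mechanism described above, a condor_schedd configuration that rejects jobs specifying the obsolete standard universe might look roughly like this sketch (the SUBMIT_REQUIREMENT_* macro names follow the convention in the HTCondor admin manual; the specific rule and the reason wording are illustrative assumptions):
SUBMIT_REQUIREMENT_NAMES = $(SUBMIT_REQUIREMENT_NAMES) NotStandardUniverse
# Accept a job only if its universe is not the standard universe (JobUniverse number 1)
SUBMIT_REQUIREMENT_NotStandardUniverse = JobUniverse =!= 1
SUBMIT_REQUIREMENT_NotStandardUniverse_REASON = "The standard universe is not supported on this pool."
A job that fails the expression never enters the queue, and condor_submit reports the configured reason string back to the user.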
https://htcondor.readthedocs.io/en/v8_9_8/admin-manual/policy-configuration.html
2021-01-15T17:44:25
CC-MAIN-2021-04
1610703495936.3
[]
htcondor.readthedocs.io
Time Scheduling for Job Execution¶ Jobs may be scheduled to begin execution at a specified time in the future with HTCondor’s job deferral functionality. All specifications are in a job’s submit description file. Job deferral functionality is expanded to provide for the periodic execution of a job, known as the CronTab scheduling. Job Deferral¶ Job deferral allows the specification of the exact date and time at which a job is to begin executing. HTCondor attempts to match the job to an execution machine just like any other job, however, the job will wait until the exact time to begin execution. A user can define the job to allow some flexibility in the execution of jobs that miss their execution time. Deferred Execution Time¶ A job’s deferral time is the exact time that HTCondor should attempt to execute the job. The deferral time attribute is defined as an expression that evaluates to a Unix Epoch timestamp (the number of seconds elapsed since 00:00:00 on January 1, 1970, Coordinated Universal Time). This is the time that HTCondor will begin to execute the job. After a job is matched and all of its files have been transferred to an execution machine, HTCondor checks to see if the job’s ClassAd contains a deferral time. If it does, HTCondor calculates the number of seconds between the execution machine’s current system time and the job’s deferral time. If the deferral time is in the future, the job waits to begin execution. While a job waits, its job ClassAd attribute JobStatus indicates the job is in the Running state. HTCond, HTCondor begins execution for the job, but immediately suspends it. The deferral time is specified in the job’s submit description file with the command deferral_time . Deferral Window¶ If a job arrives at its execution machine after the deferral time has passed,, HTCond. The deferral window is specified in the job’s submit description file with the command deferral_window . Preparation Time¶. Deferral Usage Examples¶ depends on the options provided within that flavor of Unix. In some, it appears as $ date --date "MM/DD/YYYY HH:MM:SS" +%s and in others, it appears as $ date -d "YYYY-MM-DD always waits 60 seconds after submission before beginning execution: deferral_time = (QDate + Deferral Limitations¶ There are some limitations to HTCondor’s job deferral feature. Job deferral is not available for scheduler universe jobs. A scheduler universe job defining the deferral_timeproduces a fatal error when submitted. The time that the job begins to execute is based on the execution machine’s system clock, and not the submission machine’s system clock. Be mindful of the ramifications when the two clocks show dramatically different times. A job’s JobStatusattribute is always in the Running state when job deferral is used. There is currently no way to distinguish between a job that is executing and a job that is waiting for its deferral time. CronTab Scheduling¶ HTCondor’s CronTab scheduling functionality allows jobs to be scheduled to execute periodically. A job’s execution schedule is defined by commands within the submit description file. The notation is much like that used by the Unix cron daemon. As such, HTCondor developers are fond of referring to CronTab scheduling as Crondor. The scheduling of jobs using HTCondor’s CronTab feature calculates and utilizes the DeferralTime ClassAd attribute. Also, unlike the Unix cron daemon, HTCondor never runs more than one instance of a job at the same time. 
The capability for repetitive or periodic execution of the job is enabled by specifying an on_exit_remove command for the job, such that the job does not leave the queue until desired. Semantics for CronTab Specification¶ A job’s execution schedule is defined by a set of specifications within the submit description file. HTCondor uses these to calculate a DeferralTime for the job. Table 2.3 lists the submit commands and acceptable values for these commands. At least one of these must be defined in order for HTCondor to calculate a DeferralTime for the job. Once one CronTab value is defined, the default for all the others uses all the values in the allowed values ranges. Table 2.3: The list of submit commands and their value. - The asterisk operator - The * . - Ranges - A range creates a set of integers from all the allowed values between two integers separated by a hyphen. The specified range is inclusive, and the integer to the left of the hyphen must be less than the right hand integer. For example,cron_hour = 0-4 represents the set of hours from 12:00 am (midnight) to 4:00 am, or (0,1,2,3,4). - Lists - A list is the union of the values or ranges separated by commas. Multiple entries of the same value are ignored. For example,cron_minute = 15,20,25,30 cron_hour = 0-3,9-12,15 where this cron_minute example represents (15,20,25,30) and cron_hour represents (0,1,2,3,9,10,11,12,15). - Steps - Steps select specific numbers from a range, based on an interval. A step is specified by appending a range or the asterisk operator with a slash character (/), followed by an integer value. For example,cron_minute = 10-30/5 cron_hour = */3 where this cron_minute example specifies every five minutes within the specified range to represent (10,15,20,25,30), and cron_hour specifies every three hours of the day to represent (0,3,6,9,12,15,18,21). Preparation Time and Execution Window¶ The cron_prep_time command is analogous to the deferral time’s deferral_prep_time command. It specifies the number of seconds before the deferral time that the job is to be matched and sent to the execution machine. This permits HTCondor to make necessary preparations before the deferral time occurs. Consider the submit description file example that includes cron_minute = 0 cron_hour = * cron_prep_time = 300 The job is scheduled to begin execution at the top of every hour. Note that the setting of cron_hour in this example is not required, as the default value will be *, specifying any and every hour of the day. The job will be matched and sent to an execution machine no more than five minutes before the next deferral time. For example, if a job is submitted at 9:30am, then the next deferral time will be calculated to be 10:00am. HTCond_minute = 0 cron_hour = * cron_window = 360. Scheduling¶ When a job using the CronTab functionality is submitted to HTCondor, use of at least one of the submit description file commands beginning with cron_ causes HTCond HTCondor operates on the job queue at times that are independent of job events, such as when job execution completes. Therefore, HTCondor may operate on the job queue just after a job’s deferral time states that it is to begin execution. 
HTCondor attempts to start a job when the following pseudo-code boolean expression evaluates to True: ( time() + SCHEDD_INTERVAL ) >= ( DeferralTime - CronPrepTime ) If the time() plus the number of seconds until the next time HTCondor checks the job queue is greater than or equal to the time that the job should be submitted to the execution machine, then the job is to be matched and sent now. Jobs using the CronTab functionality are not automatically re-queued by HTCondor after their execution is complete. The submit description file for a job must specify an appropriate on_exit_remove command to ensure that a job remains in the queue. This job maintains its original ClusterId and ProcId. Submit Commands Usage Examples¶ Submit Commands Limitations¶ The use of the CronTab functionality has all of the same limitations of deferral times, because the mechanism is based upon deferral times. It is impossible to schedule vanilla and standard universe jobs at intervals that are smaller than the interval at which HTCondor evaluates jobs. This interval is determined by the configuration variable SCHEDD_INTERVAL. As a vanilla or standard universe job completes execution and is placed back into the job queue, it may not be placed in the idle state in time. This problem does not afflict local universe jobs. HTCondor cannot guarantee that a job will be matched in order to make its scheduled deferral time. A job must be matched with an execution machine just as any other HTCondor job; if HTCondor is unable to find a match, then the job will miss its chance for executing and must wait for the next execution time specified by the CronTab schedule.
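For illustration, a submit description that uses the CronTab commands discussed above might look like the following sketch (the executable and log names are placeholders, and the particular combination of values is illustrative rather than taken from the manual):
executable     = nightly_report.sh
log            = nightly_report.log
# Run at minute 0 of every hour, allowing the job to be matched up to 5 minutes early
cron_minute    = 0
cron_hour      = *
cron_prep_time = 300
# Give the job a 6-minute window in case no match is available at the exact deferral time
cron_window    = 360
# Keep the job in the queue so that it is rescheduled for the next deferral time
on_exit_remove = false
queue
All of the cron_* commands, the window, and on_exit_remove are the ones described in the text above; only their combination here is an example.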
https://htcondor.readthedocs.io/en/v8_9_9/users-manual/time-scheduling-for-job-execution.html
2021-01-15T18:26:22
CC-MAIN-2021-04
1610703495936.3
[]
htcondor.readthedocs.io
QtQuick.Canvas requestAnimationFrame(callback) - requestPaint() - bool save(string filename) - } Currently the Canvas item only supports the two-dimensional rendering context. Threaded Rendering and Render Target The Canvas item supports two render targets: Canvas.Image and Canvas.FramebufferObject. The Canvas.Image render target is a QImage object. This render target supports background thread rendering, allowing complex or long running painting to be executed without blocking the UI. The Canvas.FramebufferObject render target utilizes OpenGL hardware acceleration rather than rendering into system memory, which in many cases results in faster rendering. Canvas.FramebufferObject relies on the OpenGL extensions GL_EXT_framebuffer_multisample and GL_EXT_framebuffer_blit for antialiasing. It will also use more graphics memory when rendering strategy is anything other than Canvas.Cooperative.. Pixel Operations All HTML5 2D context pixel operations are supported. In order to ensure improved pixel reading/writing performance the Canvas.Image render target should be chosen. The Canvas.FramebufferObject render target requires the pixel data to be exchanged between the system memory and the graphic card, which is significantly more expensive. Rendering may also be synchronized with the V-sync signal (to avoidscreen tearing) which will further impact pixel operations with Canvas.FrambufferObject render target. Tips for Porting Existing HTML5 Canvas Applications Although the Canvas item is provides. - Canvas.FramebufferObject - render to an OpenGL frame an image has been loaded. The corresponding handler is onImageLoaded. See also loadImage(). This signal is emitted when the region needs to be rendered. If a context is active it can be referenced from the context property. This signal can be triggered by markdirty(), requestPaint() or by changing the current canvas window. The corresponding handler is onPaint. This signal is emitted after all context painting commands are executed and the Canvas has been rendered.. If the context type is not supported or the canvas has previously been requested to provide a different and incompatible context type, null will be returned. Canvas only supports a 2d context. Returns true if the image failed to load. See also loadImage(). Returns true if the image is successfully loaded and ready to use. See also loadImage(). Returns true if the image is currently loading. See also loadImage(). Loads the given image asynchronously. When the image is ready, imageLoaded will be emitted. The loaded image can be unloaded by the unloadImage() method. Note: Only loaded images can be painted on the Canvas item. See also unloadImage, imageLoaded, isImageLoaded(), Context2D::createImageData(), and Context2D::drawImage(). Mark the given area as dirty, so that when this area is visible the canvas renderer will redraw it. This will trigger the paint signal. See also paint and requestPaint(). This function schedules callback to be invoked before composing the Qt Quick scene. Request the entire visible region be re-drawn. See also markDirty(). Save the current canvas content into an image file filename. The saved image format is automatically decided by the filename's suffix. Note: calling this method will force painting the whole canvas, not just the current canvas visible window. See also canvasWindow, canvasSize, and toDataURL(). Returns a data URL for the image in the canvas. The default mimeType is "image/png". Unloads the image. 
Once an image is unloaded it cannot be painted by the canvas context unless it is loaded again. See also loadImage(), imageLoaded, isImageLoaded(), Context2D::createImageData(), and Context2D::drawImage.
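For readers porting HTML5 canvas code, a minimal QML usage sketch (using only the onPaint handler and the 2d context described above; the sizes and colors are arbitrary) looks like:
import QtQuick 2.0

Canvas {
    id: canvas
    width: 200; height: 200
    // onPaint is emitted when the region needs rendering; "2d" is the only supported context type
    onPaint: {
        var ctx = getContext("2d")
        ctx.fillStyle = "steelblue"
        ctx.fillRect(0, 0, width, height)
    }
    // Request a redraw explicitly, for example after some external state changes
    Component.onCompleted: requestPaint()
}
The same Context2D calls familiar from an HTML5 page (fillRect, drawImage, createImageData, and so on) are available on the object returned by getContext("2d").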
https://phone.docs.ubuntu.com/en/apps/api-qml-current/QtQuick.Canvas
2021-01-15T18:08:42
CC-MAIN-2021-04
1610703495936.3
[]
phone.docs.ubuntu.com
Rebasing a pull request¶ Before you can rebase your PR, you need to make sure you have the proper remotes configured. Confirm that you're up-to-date with your fork at the origin remote: $ git status On branch YOUR_BRANCH Your branch is up-to-date with 'origin/YOUR_BRANCH'. nothing to commit, working tree clean Rebasing your branch¶ Once you've rebased, the status of your branch will have changed. For help with rebasing your PR, or other development related questions, join us on our #ansible-devel IRC chat channel on freenode.net. See also - The Ansible Development Cycle - Information on roadmaps, opening PRs, Ansibullbot, and more
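A typical sequence for the rebase itself looks like this (a sketch only: the remote name upstream is an assumption and must match whatever name you gave the main Ansible repository, and devel is the usual development branch):
$ git fetch upstream
$ git rebase upstream/devel
$ git push --force origin YOUR_BRANCH
After the forced push, the pull request on GitHub updates automatically to point at the rebased branch.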
https://docs.ansible.com/ansible/2.8/dev_guide/developing_rebasing.html
2021-01-15T18:20:23
CC-MAIN-2021-04
1610703495936.3
[]
docs.ansible.com
Getting Started¶ New Project¶ Click “File”->”New”, choose “en to zh” to create English to Chinese projects and “zh to en” to create Chinese to English projects. Choose other language pair to specify the source language and target language of the project. Translation Memory and Term Base will be created at the same time. You can also enter the required language code that complies with the ISO 639 standard by yourself. See the detailed info here. Save the project before further operations. Add File¶ When a project is opened, a list of item will show in the left area. You can manage project files, translation memory, term base, view statistics and preview. Right click “Project Files” to add files or add folders. Click filenames to open files. The interface will look like this when a file is opened. Every function area is marked out in the picture. Input your translation in the right textarea. After one segment’s translation is done, press “Enter” to go to the next one. Translation Memory¶ After you press “Enter” to finish one segment, the translation will be added to the translation memory. When translating a similar segment, it will appear in the lower area. Click that match to fill the translation into the textarea. Translation memory’s match rate can be set in Project Settings. The rate should be between 0.5 to 1.0. Add External Translation Memory¶ There are two types of translation memory in BasicCAT. One is project memory and one is external memory. Project memory stores memories created when translating the project’s files, while external memory show imported translation memory. Click “Project->Project Settings”, and in the “TM” page, you can manage external translation memory. You can import TMX files or tab-delimited txt files. For txt files, the source should be in the first row and the target should be in the second. A preview window will appear when you add a new file. When a segment from external translation memory is matched, a filename will show to indicate where that match comes from. The source text in the translation memory, the source text of the current segment and the target text in the translation memory will show in the differences display area. Terminology Management¶ When a sentence contains terms, you can select the corresponding texts in the source and the translation to add terms. BasicCAT uses opennlp to lemmatize words. So, if you add a term in its plural form, BasicCAT can detect its singular form in another segment. Right click on the term item to view more info and its history. Attention As an external term database may contains thousands of entries, BasicCAT uses a HashMap algorithm to match terms. Only the text in the source will be lemmatized. Terms in external termbases will not be lemmatized. So when adding a term, it is better to add in its original form. Importing terms is much the same as importing translation memory. TBX and tab-delimited txt files are supported. Term manager is also similar to TM manager. The difference lies in that tags and notes can be added for terms. Segments Manipulation¶ BasicCAT uses the SRX segmentation standard to segment the text. A segment can be a sentence or a phrase. Merge and Split Segments¶ If you come across a wrongly segmented name like below, you can move the cursor to the end and press “Delete” to merge the two segments. If two segments belong to different files or translation units, they cannot be merged. Different paragraphs in Word and different stories in InDesign belong to such case. 
BasicCAT hides format tags when possible. So if segments contain hidden tags, there will be a message box as below. You can choose to continue, and the merged source text may contains complex tags. When you need to split, like at the semicolon below, move the cursor to the semicolon and press “Enter”. Neglect Segment¶ When doing English to Chinese translation, it is common that the first segment and the second one have similar meanings. You can mark the first one as neglected and only translate the second one. When generating target files, these segments will be omitted. Use Menu Edit->Mark the current segment as neglected to do this. Textarea of neglected segments will be gray and not editable. Add notes¶ If you come across difficult sentences, you can make notes on how you get the translation done. Use Menu Edit->Show/Edit notes of the current segment to view or edit notes. Segments containing notes will have textarea with gray border. Statistics¶ Click “Statistics” in the project area, you can see the statistics like words number and percentage completed. Preview¶ Click “Preview” in the project area to preview the text. Translated source text will be replaced by translation.
https://docs.basiccat.org/en/latest/gettingstarted.html
2021-01-15T16:45:26
CC-MAIN-2021-04
1610703495936.3
[array(['_images/new_project.png', '_images/new_project.png'], dtype=object) array(['_images/select_languagepair.png', '_images/select_languagepair.png'], dtype=object) array(['_images/project_area.png', '_images/project_area.png'], dtype=object) array(['_images/add_file.png', '_images/add_file.png'], dtype=object) array(['_images/main_with_texts.png', '_images/main_with_texts.png'], dtype=object) array(['_images/fuzzy_match.png', '_images/fuzzy_match.png'], dtype=object) array(['_images/fuzzy_match_setting.png', '_images/fuzzy_match_setting.png'], dtype=object) array(['_images/project_setting_tm.png', '_images/project_setting_tm.png'], dtype=object) array(['_images/importpreview.png', '_images/importpreview.png'], dtype=object) array(['_images/match_result.png', '_images/match_result.png'], dtype=object) array(['_images/term_match.png', '_images/term_match.png'], dtype=object) array(['_images/term_more.png', '_images/term_more.png'], dtype=object) array(['_images/term_manager.png', '_images/term_manager.png'], dtype=object) array(['_images/merge_segments.png', '_images/merge_segments.png'], dtype=object) array(['_images/merge_segments_different_transunits.png', '_images/merge_segments_different_transunits.png'], dtype=object) array(['_images/merge_segments_hidden_tags.png', '_images/merge_segments_hidden_tags.png'], dtype=object) array(['_images/split_segments.png', '_images/split_segments.png'], dtype=object) array(['_images/mark_neglected_example.png', '_images/mark_neglected_example.png'], dtype=object) array(['_images/note_edit.png', '_images/note_edit.png'], dtype=object) array(['_images/segment_with_note.png', '_images/segment_with_note.png'], dtype=object) array(['_images/statistics.png', '_images/statistics.png'], dtype=object) array(['_images/preview.png', '_images/preview.png'], dtype=object)]
docs.basiccat.org
A charge sign can be assigned to the following types of groups: Generic, Component, Monomer, and Mer groups. During group creation, you have the option to display the charge on the charged atom itself or on the whole group. In the latter case, the charge is displayed outside of the bracket on the right. If any additional charges are added, the net charge will be calculated and displayed. The charge-bearing atom can be revealed by pointing the cursor over the group (in select mode). To replace the charge, select the group and navigate to Structure > Group > Edit Group (or right-click the selected group and select Edit Group).
https://docs.chemaxon.com/display/lts-fermium/charge-of-the-group.md
2021-01-15T18:51:11
CC-MAIN-2021-04
1610703495936.3
[]
docs.chemaxon.com
Here is a list of changes to the API that we want you to be aware of well in advance as they may affect how you use Clarifai's platform. These changes include scheduled downtime and other improvements in stability, performance or functionality of the Clarifai platform in order to better serve you as a customer. Some of these changes may not be backward compatible and thus require you to update how you call our APIs. We created this page with the mindset of being as transparent as possible so you can plan any corresponding changes in advance and minimize any interruptions to your usage of Clarifai. The dates listed in the following tables are the date we plan to make the change. We may actually make the change in the days following the specified date. However, to be safe, your client-side code needs updating before that date to minimize any downtime to your applications. We will continue to update this page regularly, so a good way to always stay up to date is to watch our documentation repo on GitHub.
https://docs.clarifai.com/product-updates/upcoming-api-changes
2021-01-15T16:53:30
CC-MAIN-2021-04
1610703495936.3
[]
docs.clarifai.com
Investigations When unusual activity triggers an alert, Investigations are opened automatically. Investigations are an aggregate of the applicable alert data in a single place and are closely tied to Alerts and Threats. Investigations poll for updates in real-time, so any new alerts or notable behaviors will automatically show up on the Investigations timeline. InsightIDR also allows you to start Investigations yourself.
https://docs.rapid7.com/insightidr/investigations/
2021-01-15T17:27:51
CC-MAIN-2021-04
1610703495936.3
[array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/insightidr/images/Screen Shot 2018-08-29 at 4.23.54 PM.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/insightidr/images/Screen Shot 2018-09-27 at 10.01.48 AM.png', None], dtype=object) ]
docs.rapid7.com
Tint Offset/Blend Dialog Box You can offset, blend, or mix the colours in a colour palette using the sliders and increasing the Amount value. To learn more about offsetting colours, see Cloning a Palette. - From the Colour View menu, select Palettes > Tint Panel or right-click and select Tint Panel. The Blend/Offset Tint panel opens.
https://docs.toonboom.com/help/harmony-17/essentials/reference/dialog-box/tint-offset-blend-dialog-box.html
2021-01-15T18:04:49
CC-MAIN-2021-04
1610703495936.3
[array(['../../Resources/Images/HAR/Trad_Anim/004_Colour/HAR11_tint_new_colours.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
System Registry¶ The purpose of the registry is to store key-value pairs of information. It can be considered an equivalent to the Windows registry (only not as complicated). You might use the registry to hold information that your script needs to store across sessions or requests.
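A short PHP sketch of typical registry usage (the namespace string and key below are invented for illustration; the set/get calls are the standard TYPO3 core Registry API, but verify them against your TYPO3 version):
use TYPO3\CMS\Core\Registry;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$registry = GeneralUtility::makeInstance(Registry::class);
// Store a value under a namespace/key pair so it survives across requests
$registry->set('tx_myextension', 'lastRunTimestamp', time());
// Read it back later; the third argument is the default returned if the key has never been set
$lastRun = $registry->get('tx_myextension', 'lastRunTimestamp', 0);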
https://docs.typo3.org/m/typo3/reference-coreapi/10.4/en-us/ApiOverview/SystemRegistry/Index.html
2021-01-15T18:44:47
CC-MAIN-2021-04
1610703495936.3
[]
docs.typo3.org
Changelog for package pr2_app_manager 0.6.0 (2018-02-14) Merge pull request #15 from k-okada/remove_build_depend we do not need any package during build process we do not need any package during build process Contributors: Kei Okada 0.5.20 (2015-05-05) 0.5.19 (2015-04-29) added changelogs Contributors: TheDash 0.5.18 (2015-02-10) Merge branch 'hydro-devel' of into hydro-devel Updated maintainership fix install destination Contributors: Furushchev, TheDash 0.5.16 (2015-02-06) 0.5.15 (2015-01-28) 0.5.14 (2014-11-20) 0.5.13 (2014-11-18) 0.5.12 (2014-11-17) 0.5.11 (2014-10-20) 0.5.10 (2014-10-17) Removed include dependency Removed use_source_permissions Fixed pr2_app_manager install Changelogs Contributors: TheDash 0.5.9 (2014-10-01) Updated pr2_app_manager, now compiles in hydro Updated pr2_app_manager Added pr2_app_manager package Contributors: TheDash
http://docs.ros.org/en/kinetic/changelogs/pr2_app_manager/changelog.html
2021-01-15T18:31:49
CC-MAIN-2021-04
1610703495936.3
[]
docs.ros.org
How to Set Email Send Limits Last modified: September 28, 2020 Overview WHM allows you to specify the maximum number of emails that each domain on your server can send per hour. This allows you to limit spam and better regulate bandwidth that the domains on your server use. - You cannot use the Max hourly emails per domain setting to disable email for an account or domain. - The system only enforces email send limits on remote email deliveries. Set an hourly limit for the domains of an account Add the MAX_EMAIL_PER_HOUR setting to the /var/cpanel/users/username file, then run the /usr/local/cpanel/scripts/updateuserdomains script. This script constructs the individual threshold files that Exim uses to determine whether the account has reached its maximum email limit. The system applies the MAX_EMAIL_PER_HOUR setting in the /etc/email_send_limits file to any domain without a specific entry in the /var/cpanel/users/username file. If the /etc/email_send_limits file does not exist, the system assigns the default value to the domain.
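As a sketch of the per-account step above (the account name and limit value are placeholders, and the KEY=value form is the usual format of cPanel user files, an assumption here rather than something quoted from this page):
# Add to /var/cpanel/users/exampleuser
MAX_EMAIL_PER_HOUR=200

# Then rebuild the per-domain threshold files that Exim consults
/usr/local/cpanel/scripts/updateuserdomains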
https://docs.cpanel.net/knowledge-base/email/how-to-set-email-send-limits/
2021-01-15T17:46:58
CC-MAIN-2021-04
1610703495936.3
[]
docs.cpanel.net
RxOrcData revoscalepy.RxOrcData(file: str = None, column_info=None, file_system: revoscalepy.datasource.RxFileSystem.RxFileSystem = None, write_factors_as_indexes: bool = False) Description Main generator for class RxOrcData, which extends RxSparkData. Arguments file Character string specifying the location of the data. e.g. “/tmp/AirlineDemoSmall.orc”. column_info List of named variable information lists. Each variable information list contains one or more of the named elements given below. Currently available properties for a column information list are: type: Character string specifying the data type for the column. Supported types are: ”bool” (stored as uchar), “integer” (stored as int32), “int16” (alternative to integer for smaller storage space), “float32” (stored as FloatType), “numeric” (stored as float64), “character” (stored as string), “factor” (stored as uint32), “Date” (stored as Date, i.e. float64.) levels: List of strings containing the levels when type = “factor”. If the levels property is not provided, factor levels will be determined by the values in the source column. If levels are provided, any value that does not match a provided level will be converted to a missing value. file_system Character string or RxFileSystem object indicating type of file system; It supports native HDFS and other HDFS compatible systems, e.g., Azure Blob and Azure Data Lake. Local file system is not supported. write_factors_as_indexes Bool value, if True, when writing to an RxOrcData data source, underlying factor indexes will be written instead of the string representations. Returns object of class RxOrcData. Example import os from revoscalepy import rx_data_step, RxOptions, RxOrcData sample_data_path = RxOptions.get_option("sampleDataDir") colInfo = {"DayOfWeek": {"type":"factor"}} ds = RxOrcData(os.path.join(sample_data_path, "AirlineDemoSmall.orc")) result = rx_data_step(ds)
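Note that the example above defines colInfo but never passes it to the data source; to apply the column typing, hand it to the constructor through the column_info argument (everything else unchanged):
ds = RxOrcData(os.path.join(sample_data_path, "AirlineDemoSmall.orc"), column_info=colInfo)
result = rx_data_step(ds)
With that argument supplied, the DayOfWeek column should come back as a factor rather than a plain string.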
https://docs.microsoft.com/ru-ru/machine-learning-server/python-reference/revoscalepy/rxorcdata
2021-01-15T18:56:59
CC-MAIN-2021-04
1610703495936.3
[]
docs.microsoft.com
Welcome to Japsa’s documentation!¶ Japsa is a free, open source JAva Package for Sequence Analysis. It contains a range of analysis tools that biologists and bioinformaticians would routinely use but may not be available elsewhere. It also provides a Java library to be incorporated in other Java projects. The package aims to be lightweight (fast and memory efficient) and to use the least possible dependencies. Its tools have a consistent command line interface and support reading from/writing to streams whenever possible. Japsa and its source code is licensed under the BSD license and is available on GitHub. Contents: - 1. Dependencies - 2. Installation - 3. Usage convention - 4. List of tools - 4.1. jsa.seq.stats: Show statistics of sequences - 4.2. jsa.seq.sort: Sort the sequences in a file - 4.3. jsa.seq.extract: Extract subsequences from a genome - 4.4. jsa.seq.split: Split multiple sequence file - 4.5. jsa.seq.join: Join multiple sequences into one file - 4.6. jsa.seq.annovcf: Annotate a vcf file - 4.7. jsa.seq.gff2fasta: Extract gene sequences - 4.8. jsa.seq.emalign Align two sequences using EM - 4.9. jsa.hts.countReads: Count reads from bam files - 4.10. jsa.hts.errorAnalysis: Error analysis of sequencing data - 4.11. jsa.hts.n50: Compute N50 of an assembly - 4.12. npReader: real-time conversion and analysis of Nanopore sequencing data - 4.13. jsa.np.filter: Filter sequencing data - 4.14. jsa.np.rtSpeciesTyping: Bacterial species typing with Oxford Nanopore sequencing - 4.15. jsa.np.rtMLST: Multi-locus Sequencing Typing in real-time with Nanopore sequencing - 4.16. jsa.np.rtStrainTyping: Bacterial strain typing with Oxford Nanopore sequencing - 4.17. jsa.np.rtResistGenes: Antibiotic resistance gene identification in real-time with Nanopore sequencing - 4.18. npScarf: real-time scaffolder using SPAdes contigs and Nanopore sequencing reads - 4.19. barcode: real-time de-multiplexing Nanopore reads from barcode sequencing - 4.20. jsa.util.streamServer: Receiving streaming data over a network - 4.21. jsa.util.streamClient: Streams data over a network - 4.22. XMas: Robust estimation of genetic distances with information theory - 4.23. jsa.phylo.normalise: Normalise branch length of a phylogeny - 4.24. capsim: Simulating the Dynamics of Targeted Capture Sequencing with CapSim - 4.25. Expert Model: tool for compression of genomic sequences - 5. Credits - 6. License
https://japsa.readthedocs.io/en/latest/index.html
2022-08-08T07:33:57
CC-MAIN-2022-33
1659882570767.11
[]
japsa.readthedocs.io
public final class LocalDate extends Object Note that ISO 8601 has a number of differences with the default gregorian calendar used in Java. This class implements these differences, so that year/month/day fields match exactly the ones in CQL string literals. This method is not lenient, i.e. '2014-12-32' will not be treated as '2015-01-01', but will instead throw an error. Time components will remain without effect, as this class does not keep time components. See Calendar.
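A small Java sketch of typical usage (fromYearMonthDay and getYear are the driver's standard accessors, but verify the exact signatures against your driver version):
import com.datastax.driver.core.LocalDate;

public class LocalDateExample {
    public static void main(String[] args) {
        // Build a date from explicit year/month/day fields; an invalid combination throws instead of rolling over
        LocalDate d = LocalDate.fromYearMonthDay(2014, 12, 31);
        System.out.println(d);           // ISO 8601 form, e.g. 2014-12-31
        System.out.println(d.getYear()); // 2014
    }
}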
https://java-driver.docs.scylladb.com/scylla-3.10.2.x/api/com/datastax/driver/core/LocalDate.html
2022-08-08T08:24:47
CC-MAIN-2022-33
1659882570767.11
[]
java-driver.docs.scylladb.com
You're reading the documentation for a version of ROS 2 that has reached its EOL (end-of-life), and is no longer officially supported. If you want up-to-date information, please have a look at Humble.

- Configuring QoS at runtime, see ros2/design#280

Infrastructure and tools

- Parameters (enforce type)
- Implement C client library rclc [**]
- Support more DDS / RTPS implementations:
  - Connext 6, see ros2/rmw_connext#375
  - Connext dynamic [*]
  - RTI's micro implementation [*] [***]

Port of existing ROS 1 functionality

- Perception metapackage
- Perception PCL
- MoveIt (MoveIt Maintainers are tracking:)
- RQt: convert more plugins [* each when dependencies are available]

Reducing Technical Debt

- Extend testing and resolve bugs in the current code base
- Waitset inconsistency
- Multi-threading problems with components
- Fix flaky tests
- Ability to run (all) unit tests with tools e.g. valgrind
https://docs.ros.org/en/eloquent/Feature-Ideas.html
2022-08-08T06:34:11
CC-MAIN-2022-33
1659882570767.11
[]
docs.ros.org
How to perform Sentiment Analysis on tweets¶

The following how-to reviews a Ryax workflow to do sentiment analysis on Tweets, and how to reproduce such an experiment. We will see how to package up a state of the art BERT language model in its own container, and connect this to two different external APIs (Twitter and Google) with very little user code required. First we will take a look at the example workflow and explain what it does and how. Then, we will go through each individual module and show how they may be easily extended for more functionality.

Birds-Eye View¶

Let's begin with a global view of the workflow. Here is a look from inside the Ryax workflow studio:

What does this do?¶

From only the previous image it may be clear what this workflow does. Here's the rundown:

- Trigger an execution every so often (defined by the user) with the "Emit Every Gateway".
- Fetch a batch of tweets matching a user-defined query with the Twitter module.
- Score each tweet with the sentiment analysis module.
- Take the scored data and publish it all in a structured manner to a Google Sheet.

Why is this interesting?¶

There are a few key points about this workflow that make it appealing from a business perspective: We can easily leverage two different external APIs in this workflow to get value out of data and into the hands of our team. Once running, no maintenance is needed unless functional changes are desired. The workflow can be scheduled to run as much as the user defines. In this way, once it's set up, no more babysitting is needed and all the results can be accessed directly through the Google Sheet. As we will see, adapting this workflow to do different things is very easy, as the modules are not bound to each other, and any additional data requirements can be added with a few lines of code.

Now, let's take a look at each module in a bit more detail.

Emit Gateway¶

A 'Gateway', as we call it in Ryax, is any module that kickstarts a workflow. This particular gateway is called Emit Every, since it is designed to "emit" or launch an execution "every" so often. It has one input: a string to define a temporal quantity denoting how often it should wait to trigger a new execution. For example, the input 1d3h5m would trigger an execution every 1 day, 3 hours, and 5 minutes… More precisely every 1625 minutes. We can see these details from within the Ryax function store:

Twitter Module¶

The first module in our workflow is designed to fetch some amount of tweets from the Twitter API, according to a user-defined query. In the Ryax module store we can see more details about the I/O. As we can see, this module requires a few things to boot up:

- Number of tweets: how many tweets should we process at once?
- Query: this is a string which we use to search for tweets. If a hashtag, or a hashtag plus some language is desired, just use a # in the query.
- Credentials: To use this module, one must necessarily have a Twitter account. With this, anyone can go to Developer-Twitter and get credentials to access the Twitter API through their code. This input is a file with your credentials exactly as Twitter gives them. In your workflow definition, give the relative path to where you have stored this credential file (in JSON).

The module will return the path to a file where it has stored the retrieved data.

Sentiment Analysis¶

We are developing several NLP modules at Ryax. This one is simple but very powerful. Let's take a look in the studio:

This module shows an NLP use case called Text Classification (sometimes called sentiment analysis).
It takes segments of text as input, and will attempt to predict whether the text is saying something positive or negative, along with a confidence score. This module does not need to work with tweets at all. As such it extends to all kinds of NLP use cases so long as the algorithm needed is sentiment analysis, and BERT (specifically distilbert fine tuned for text classification) is deemed appropriate for the task (as of October 2020 it most definitely is). You can check out the model as well as some of its siblings at Huggingface. If you're familiar with Python, it will be easy to replace the default model with another from the Huggingface library.

The module takes as input a file with text data stored (hint, this was the output of the previous module), and outputs that same data by writing to either the same or a different file, labeled with a binary prediction and a confidence score. In this case a prediction is two things:

- Prediction: this will be a binary value, either POSITIVE or NEGATIVE
- Confidence: the probabilistic confidence of that prediction (closer to 1 means more certain of the given prediction, while 0.5 is completely uncertain)

Publish to Google Sheet¶

The last module in this workflow takes any data we send it, and as long as it is structurally sound, publishes it to a Google Sheet. Again, let's have a look in the studio:

Like with the Twitter module, this module requires a little extra push to get moving. The inputs are very straightforward, but for this one you'll need to generate a token for your Google account. We will tell you how to do so right away and then move on to the module description. Here are the steps to get this module into production:

1. Clone or download the code in this repository. This is the source code for the module.
2. In a browser, go to this url. FOLLOW ONLY STEP 1. This will allow you to use the Google API. Please select to save the credentials file locally. PUT THIS FILE INTO THE SAME DIRECTORY AS THE CODE DOWNLOADED FROM STEP 1; it should be called credentials.json. The contents of this file will later be copied and pasted as input to the module.
3. Open up a terminal and navigate to the directory where the code you downloaded from step 1 exists. Ensure you have python3 on your machine, and run the following command: pip3 install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib numpy.
4. In that same terminal window, run the following command: python3 handler.py. This program will direct you online, where you will need to click allow to be able to update your Google docs. NOTE: In some cases, you may get a warning that the site is unsafe, but to continue from here you must click proceed. This is due to the fact that you are using an app created by your Google credentials that Google itself is not verifying (trust us, it will be safe).
5. Ensure the presence of the token.pickle file in the current working directory, alongside handler.py and ryax_metadata.yaml (as well as other files perhaps, but these two are the only ones we are interested in).
6. Build the module in Ryax! You can do so either by pushing the code you downloaded (with the added token file) to a repository of yours, or to a new branch in our public repo, and then scanning it in the Ryax UI, or by using the CLI and uploading it directly from your file system.
7. You did it!

Here are the specific usage details for once the module is in Ryax: Both the credentials and the token file are inputs to this module.
If you followed the steps above, then you can simply copy and paste the contents of the credentials.json file into the webUI credentials input parameter, and copy and paste the name of the token file with extension (typically just token.pickle unless you changed it) into the webUI token file input parameter (remember, the file must have been uploaded in the same directory as the handler.py when you built the module!). Then, you can either create a new Google Sheet or take the ID of an existing one to use as the spreadsheet ID input parameter. For this you'll need to go to your Google Drive, and once a Google Sheet is created, take the ID from the URL. Below is an example of how to locate it in the URL:

In the above picture, the highlighted portion of the URL is the ID that you will use as input!

One special thing about this module is that it uses an internal Python module from Ryax called ryax_google_agent, where we have and are continuing to develop Google API methods. In this case, we have our own internal representation of a Google Sheet that can be manipulated. This makes it really easy for the user to have different interactions with the Google API by changing as little code as possible. For example, would you like to overwrite the entire sheet every time this module is run? Ok fine, that's a one word change, from a GoogleSpreadsheet.append() function call to a GoogleSpreadsheet.update() call in the code. There you go!

This module will perform a simple print in the logs showing the return message from the Google API when executed. You should see that Google Sheet update upon execution!
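To make the sentiment analysis module described earlier more concrete, here is a minimal, hedged Python sketch of what such a handler could look like. It is not the actual Ryax module source: the one-tweet-per-line file format, the handler signature, and the use of the Hugging Face transformers pipeline with the distilbert-base-uncased-finetuned-sst-2-english checkpoint are assumptions made purely for illustration.

# Hedged sketch of a sentiment-scoring handler; not the actual Ryax module code.
# Assumes the transformers library is installed and the input file has one tweet per line.
import json
from transformers import pipeline

def handle(input_path: str, output_path: str) -> str:
    """Read raw tweets, attach a POSITIVE/NEGATIVE label plus confidence, write JSON lines."""
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    with open(input_path) as f:
        tweets = [line.strip() for line in f if line.strip()]

    results = classifier(tweets)
    with open(output_path, "w") as out:
        for text, res in zip(tweets, results):
            # Each result looks like {"label": "POSITIVE", "score": 0.99}.
            out.write(json.dumps({
                "text": text,
                "prediction": res["label"],
                "confidence": res["score"],
            }) + "\n")
    return output_path

Swapping in another Huggingface text classification checkpoint, as the article suggests, would only change the model argument.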
https://docs.ryax.tech/howto/text_classification_tutorial.html
2022-08-08T07:59:51
CC-MAIN-2022-33
1659882570767.11
[array(['../_images/twitter_wf.png', 'Twitter Sentiment Analysis WF'], dtype=object) array(['../_images/emitevery.png', 'Emit Every'], dtype=object) array(['../_images/twittermodule.png', 'Twitter Gateway'], dtype=object) array(['../_images/distlbert_text_clasif.png', 'NLP Module'], dtype=object) array(['../_images/publish_to_sheet.png', 'Publish to Google Sheet'], dtype=object) array(['../_images/google_sheet_id_url.png', 'Sheet ID'], dtype=object)]
docs.ryax.tech
myCNC controllers Axis B can be switched to independent pulse generator output. Independent Pulse Generator was added to firmware dated July 20, 2018. There are a number of global array registers to access to the independent pulse generator Originally the Pulse generator was meant to be used as Coolant control. Global register GVAR_PLC_COOLANT_STATE (#7372) is used to detect the Current State of the Pulse generator. If Generator Frequency register (#8133) is changed - If the RAW register (#8130) is written directly, the value will be sent to the Frequency Generator despite on Current Coolant state (#7372) Global variable registers can be written in either Hardware or Software PLC. Q: Why is the Frequency Ratio needed? A: Internal frequency units do not make sense for a normal user. It is convenient to set up the ratio and has the Frequency value in a unit that would be usable for a user. Depending on the Frequency generator application, the unit might be very different. It may be [1Hz] if you need a simple frequency generator, or [ml/hour] for Coolant control or [rpm] for Spindle speed through pulse-dir servo controller. The first application we used the Pulse Generator was a Coolant control base on a stepper driver.. main() { gvarset(60000,1);//run Servo ON procedure gvarset(8131, 8000); //set Frequency acceleration gvarset(8132, 1359); //set Ratio gvarset(8133, 0); //Off the Generator. exit(99); }; Function coolant_motor_start() is added to the mill-func.h file coolant_motor_start() { timer=10;do{timer--;}while(timer>0); gvarset(8131,1000000); //acceleration timer=10;do{timer--;}while(timer>0); x=gvarget(8133);//get the speed (frequency) k=gvarget(8132);//get the ratio x=x*k; //calculate the RAW frequency gvarset(8130,x); //send the raw frequency to the register timer=30;do{timer--;}while(timer>0); //wait a time for the frequency value to be delivered }; M08.plc procedure which starts the coolant motor would be the following (note the inclusion of mill-func.h at the beginning of the code): #include pins.h #include mill-func.h main() { gvarset(7372,1); portset(OUTPUT_FLOOD); // coolant_motor_start(); exit(99); //normal exit }; A procedure M09.plc to stop a coolant motor is simpler - we simply need to write “0” to the raw frequency register. #include pins.h main() { gvarset(7373,0); gvarset(7372,0); portclr(OUTPUT_FLOOD); portclr(OUTPUT_MIST); gvarset(8130,0); //stop the pulse generator timer=30;do{timer--;}while(timer>0); //wait a time for the frequency value to be delivered exit(99); //normal exit }; It is possible to control the spindle speed through the pulse generator. This is done through the independent pulse generator implemented in myCNC, which can be “mixed into” the B axis channel. An independent generator is controlled by writing values to the global variables 8130-8133, as described in the table at the beginning of this page (Register Name / Description table). When using the GUI elements (buttons, input lines, etc.) it is convenient (and necessary) to use the multiplier and frequency registers when setting the generator frequency (what you see is NOT what you get, as the multipliers convert the human-read values into real machine values). For example, when the operator changes the value of register #8133 (the preset generator frequency), myCNC software will automatically recalculate the value of this preset frequency while taking into account the preset multiplier and will send this data to the controller. 
When utilizing the Hardware PLC, you MUST use the "raw" value register entry (8130) and independently take into account the multiplier (in the PLC code), as no such helpful conversion is available.

1. Add the code that enables the generator into the Hardware PLC procedure M03.plc (spindle ON procedure). It is convenient to add the code to the end of the procedure, before the exit(99); line.

// Set the acceleration of the generator
gvarset(8131, 100000);
timer = 30; do {timer--;} while (timer > 0); // Delay for 30ms
// Convert the spindle speed reference to frequency.
// The value of the coefficient is selected in such a way as to convert
// the 12-bit spindle speed to the generator frequency
k = 123456;
freq = eparam * k; // Calculate the raw value of the generator frequency
// Send the generator frequency value
gvarset(8130, freq);
timer = 30; do {timer--;} while (timer > 0); // Delay for 30ms
exit(99); // normal exit

2. Add the following code to enable the generator into the Hardware PLC spindle speed adjustment procedure (SPN.plc controls the speed with which the spindle is rotating).

// Set the acceleration of the generator
gvarset(8131, 100000);
timer = 30; do {timer--;} while (timer > 0); // Delay for 30ms
// Convert the reference spindle speed to frequency.
// The value of the coefficient k is selected in such a way as to convert
// the 12-bit spindle speed to the generator frequency
k = 123456;
freq = eparam * k; // Calculate the raw value of the generator frequency by using the multiplier k
// Send the generator frequency value
gvarset(8130, freq);
timer = 30; do {timer--;} while (timer > 0); // Delay for 30ms
exit(99); // normal exit

3. Add the generator shutdown code to the Hardware PLC spindle shutdown procedure (M05.plc turns the spindle OFF). It is also convenient to add this code right at the end of the PLC procedure, before the exit(99); line.

// Send the generator frequency value
gvarset(8130, 0);
timer = 30; do {timer--;} while (timer > 0); // Delay for 30ms
exit(99); // normal exit

In this implementation, the pulse-dir generation will be switched ON simultaneously with the classic control (a relay + 0-10V analog output). It is assumed that an unused spindle will be shut off physically by the operator and that the additional control signal will not affect operation. If the task is to connect both spindles at the same time and switch them during operation (for example, by referencing their tool number), it is necessary to organize a more complex PLC procedure which checks the tool number, the value of a global variable or a controller input, and by this condition enables only one of the spindles.

In this example, we are assuming that the speed of a conventional spindle is 24,000 rpm. This value, respectively, is registered as the maximum spindle speed in the settings (Settings > Config > Technology > Mill/Lathe > Spindle). At this spindle speed, a full 10V signal must be sent to the analog output, so the "voltage ratio" coefficient is set to "1" (in the case of, for example, a spindle with an input signal range of 0-5V, this coefficient would be 0.5 to get a 5V signal at maximum speed). When calling the PLC procedures for turning ON the spindle (M03.plc) and changing the spindle speed (SPN.plc), the spindle speed value is stored in the eparam variable. myCNC controllers have 12-bit registers for PWM and DAC at 0-10V. This means that with a maximum spindle speed of 24000 rpm and a factor of 1, the eparam variable will have a maximum value of 4095.
Assume that the maximum servo spindle speed is 4,500 rpm. Then the eparam value at a speed of 4500 rpm will be: 4500 * (4095 ÷ 24000) = 768 The Pulse-Dir input of the servo spindle is set to 10,000 pulses, i.e. the motor shaft will make a full revolution every 10,000 pulses. Then, to achieve a full speed of 4500 rpm, the following pulse rate is required: 10000 * (4500 ÷ 60) = 750 000 The register RAW value for 750kHz (750,000Hz) will therefore be calculated as follows: 750000 ÷ 0.0014549 = 515499347 If the maximum speed corresponds to the eparam value of “768”, then the value of the coefficient to obtain “515499347” will be calculated as follows: 515499347 ÷ 768 = 671223 By setting these values in the M03.plc and SPN.plc procedures, we will generate the required 750 kHz frequency when the spindle speed is set to 4500, as well as smooth frequency control over the entire range from 0 to 4500 rpm. One unit of the generator acceleration is, by a very rough approximation, 1 impulse / s2. This means that with such an acceleration, the generator “accelerates” to a frequency of 1 Hz in 1 second. If, in our case, the maximum frequency is 750,000, then the acceleration must be equal to the same value in order to “accelerate” to this frequency in 1 second. //Turn on Spindle clockwise #include pins.h #include vars.h main() { command=PLC_MESSAGE_SPINDLE_SPEED_CHANGED; parameter=eparam; message=PLCCMD_REPLY_TO_MYCNC; timer=0;do{timer++;}while (timer<10);//pause to push the message with Spindle Speed data timer=0; proc=plc_proc_spindle; val=eparam; if (val>0xfff) {val=0xfff;}; if (val<0) {val=0;}; dac01=val; portclr(OUTPUT_CCW_SPINDLE); portset(OUTPUT_SPINDLE); gvarset(7370,1);//Spindle State timer=30;do{timer--;}while (timer>0); // gvarset(7371,eparam);//Spindle Speed Mirror register //gvarset(7372,0);//Mist State //gvarset(7373,0);//Flood State gvarset(8131, 500000); timer=30;do{timer--;}while(timer>0); //30ms delay k=671223; freq=val*k; //calculate the RAW frequency if (freq>515499348) {freq=515499348;}; gvarset(8130,freq); timer=30;do{timer--;}while(timer>0); //30ms delay //delay after the spindle was turned on timer=spindle_on_delay; do{timer--;}while (timer>0); //delay until the spindle reaches the given speed exit(99); //normal exit }; #include vars.h //set the Spindle Speed through DAC main() { val=eparam; dac01=val; //send the value to the DAC register //Change the Spindle State gvarset(7371,eparam); timer=30;do{timer--;}while (timer>0); //30ms delay s=gvarget(7370); if (s!=0) //if spindle should be ON { k=671223; freq=val*k; //calculate the RAW frequency using the multiplier if (freq>515499348) {freq=515499348;}; gvarset(8130,freq); timer=30;do{timer--;}while(timer>0); //30ms delay }; exit(99);//normal exit }; This is for records only. Users don't have to utilize these settings which can be altered only by having low-level access to the controller. Independent pulse output can be used for -
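The arithmetic above is easy to get wrong when adapting it to a different spindle or Pulse-Dir resolution, so here is a small, hedged Python helper that simply reproduces the calculation. It is only a desk-side calculation aid, not code that runs on the controller; the 0.0014549 Hz-per-raw-unit constant and the 12-bit (4095) DAC range are taken from the text above, and everything else is a parameter you supply.

# Calculation aid for the M03.plc / SPN.plc coefficients derived above.
# Constants come from this article; verify them against your controller documentation.
RAW_UNITS_PER_HZ = 1 / 0.0014549   # raw register units per Hz of output frequency
DAC_MAX = 4095                     # 12-bit PWM/DAC range

def spindle_coefficients(max_speed_in_settings_rpm, servo_max_rpm, pulses_per_rev):
    """Return (eparam at servo max speed, max frequency in Hz, raw register value, PLC coefficient k)."""
    eparam_max = round(servo_max_rpm * DAC_MAX / max_speed_in_settings_rpm)
    freq_hz = pulses_per_rev * servo_max_rpm / 60.0
    raw_value = round(freq_hz * RAW_UNITS_PER_HZ)
    k = round(raw_value / eparam_max)
    return eparam_max, freq_hz, raw_value, k

# Reproduces the worked example: 24000 rpm settings scale, 4500 rpm servo, 10000 pulses/rev.
print(spindle_coefficients(24000, 4500, 10000))
# -> (768, 750000.0, 515499347, 671223)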
http://docs.pv-automation.com/mycnc/independent_pulse_generator
2022-08-08T06:26:50
CC-MAIN-2022-33
1659882570767.11
[]
docs.pv-automation.com
Feel++ Developer Manual

This developer manual is for Feel++ version latest.

- An introduction to Feel++ programming, which goes through the main mathematical operations to solve a PDE and presents basic programming aspects
- A mathematical concepts reference, which goes through the different mathematical concepts, functions and classes and provides the reference documentation
https://docs.feelpp.org/dev/latest/index.html
2022-08-08T07:07:35
CC-MAIN-2022-33
1659882570767.11
[]
docs.feelpp.org
Manage Mirrored Volumes

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

Manage mirrored volumes
- Create and test a mirrored system or boot volume
- Add a mirror to an existing simple volume
- Break a mirrored volume into two volumes
- Remove a mirror from a mirrored volume
- Reconnect the disk and repair the mirrored volume
- Reactivate a mirrored volume
- Replace a failed mirror with a new mirror on another disk
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc784048(v=ws.10)?redirectedfrom=MSDN
2022-08-08T07:57:42
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
The OpenFlow protocol is driven by ONF (Open Networking Foundation), a leader in software-defined networking (SDN). The OpenFlow protocol encompasses three essential components of an SDN framework:

Warning: On N3048EP-ON, N3048ET-ON and N3132PX switches, run the "save_config" command. For what is supported in OpenFlow 1.3.0 and OpenFlow 1.4.0, please see PicOS Support for OpenFlow 1.3.0 and PicOS Support for OpenFlow 1.4.0.

The following websites provide detailed information on Open vSwitch and the OpenFlow protocol.

PicOS can run in two different modes:

- In OVS mode, L2/L3 daemons are not running; the system is fully dedicated to OpenFlow and OVS.
- In L2/L3 mode, L2/L3 daemons are running, but OVS can also be activated if CrossFlow is activated.

This chapter assumes that the PicOS OVS mode is active. Please see PICOS Mode Selection to modify the PicOS mode. The N1148T-ON switch does not support OVS features.
https://docs.pica8.com/exportword?pageId=52205143
2022-08-08T06:24:47
CC-MAIN-2022-33
1659882570767.11
[]
docs.pica8.com
Schema File: schema.json or schema.yaml You can use this file to define a JSON schema for your component. When creating a new component via miyagi new <component>, the schema file is created with this content: { "$schema": "", "required": [], "properties": {} } Your mock data will be validated against your schema. The result will be rendered on the component view and logged in the console (if the validation failed).
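miyagi runs this validation itself, but to illustrate what validating mock data against such a schema means, here is a small hedged sketch using Python's jsonschema package. miyagi is a Node.js tool, so this is not how it works internally, and the mock-data file name used below is an assumption for the example.

# Illustrative only: miyagi performs this validation internally (in Node.js).
# Requires the jsonschema package: pip install jsonschema
import json
from jsonschema import Draft7Validator

with open("schema.json") as f:
    schema = json.load(f)
with open("mocks.json") as f:        # assumed name for the component's mock data file
    mock_data = json.load(f)

validator = Draft7Validator(schema)
errors = list(validator.iter_errors(mock_data))
if errors:
    for error in errors:
        print(f"validation failed at {list(error.path)}: {error.message}")
else:
    print("mock data matches the schema")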
https://docs.miyagi.dev/component-files/schema/
2022-08-08T08:07:29
CC-MAIN-2022-33
1659882570767.11
[]
docs.miyagi.dev
Creating a CMP for output to the admin panel

Last updated Jun 23rd, 2019

In the last article I explained how you can create your component using MIGX. Now I will show how you can create and edit data in the admin panel. If you don't know what this is about, see the link to the first article. In fact, creating your own page is essentially no different from creating the usual MIGX TV.

Go to the MIGX tab and fill in:

Name: electrica
Add item replacement: Create string
unique MIGX ID: electrica

Then open the CMP-Settings tab and fill in:

Then go to the MIGXdb-Settings tab and fill in the package (package name with XML markup) and Classname:

Click Save. Then go to Settings - Menu. Create the menu:

In the parameters we write the name of your component, exactly as you called it. Well, that's all, we can now open it:

We continue by displaying all our fields. Edit our MIGX configuration, adding contextmenus:

In the Columns tab, fill in our fields:

IMPORTANT!!! In the columns you need to create the id field, otherwise you will not be able to edit the data.

In the Formtabs tab, fill in our fields:

That's all! The output on the front end has already been described in the previous article. Create a snippet and make the selection we need. Or you can use the snippet:

[[!migxLoopCollection?
&packageName=`electrica`
&classname=`electricaItem`
&tpl=`testTPL`
]]

Chunk:

<h1>[[+title]]</h1>
<p>[[+description]]</p>

And that's what we got:
https://docs.modx.org/current/en/extras/migx/migx.tutorials/creating-cmp
2022-08-08T08:32:50
CC-MAIN-2022-33
1659882570767.11
[array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-1.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-2.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-3.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-4.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-5.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-6.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-7.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-8.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-9.png', None], dtype=object) array(['/2.x/en/extras/migx/migx.tutorials/creating-cmp/creating-cmp-10.png', None], dtype=object) ]
docs.modx.org
- 1 The automatic registration will only work within the same Python installation (this also includes virtual environments), so make sure to instruct your users to use the exact same Python installation for installing the plugin that they also used for installing & running OctoPrint. For OctoPi this means using ~/oprint/bin/pip for installing plugins instead of just pip.

Version management after the official plugin repository release¶

Once your plugin is available in the official plugin repository, you probably want to create and distribute new versions. For "beta" users you can use the manual file distribution method, or the more elegant release channels (see below). After you have finalized a new plugin version, don't forget to actually update the version in the setup.py, and submit a new release on GitHub. After you have published the new release, you can verify it on your installed OctoPrint by force-checking for updates under the advanced options (in the software updates menu in the settings). The new version will appear to the plugin users within the next 24 hours (it depends on their cache refreshes).

The Software Update Plugin has options to define multiple release channels, and you can let the users decide if they want to test your pre-releases or not. This can be achieved by defining stable_branch and prerelease_branches in the get_update_information function, and creating GitHub releases on the newly configured branches too. For more information you can check the Software Update Plugin documentation or read a more step-by-step writeup here.
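As a rough illustration of the release-channel setup mentioned above, here is a hedged Python sketch of a get_update_information hook using the Software Update Plugin's github_release check. The plugin name, repository, branch names and the exact sub-keys of stable_branch and prerelease_branches are assumptions; verify them against the Software Update Plugin documentation linked above before relying on them.

# Hedged sketch of a Software Update hook with release channels.
# Names below are placeholders; double-check key names against the plugin docs.
def get_update_information(self):
    return {
        "myplugin": {
            "displayName": "My Plugin",
            "displayVersion": self._plugin_version,

            # Check against GitHub releases of the plugin repository.
            "type": "github_release",
            "user": "yourname",
            "repo": "OctoPrint-MyPlugin",
            "current": self._plugin_version,

            # Release channels: a stable channel plus a release-candidate channel.
            "stable_branch": {
                "name": "Stable",
                "branch": "main",
                "comittish": ["main"],
            },
            "prerelease_branches": [
                {
                    "name": "Release Candidate",
                    "branch": "rc",
                    "comittish": ["rc", "main"],
                },
            ],

            "pip": "https://github.com/yourname/OctoPrint-MyPlugin/archive/{target_version}.zip",
        }
    }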
https://docs.octoprint.org/en/master/plugins/distributing.html
2022-08-08T07:57:50
CC-MAIN-2022-33
1659882570767.11
[]
docs.octoprint.org
Click the Browse for license file button, which will open a file explorer. Navigate to your license file. Selecting a valid license file will show similar details to the corresponding management center image below.

The logging section enables the user to configure the logging level as well as the log file location for the Connector Service. Please note that logging on the Connector Service is performed using the Microsoft Enterprise Library Logging Application Block. By default logging is configured as follows:

The General section lets the user configure the general settings that will be applied to the Connector Service.

Information service port: The port number used when behind a load-balancer to provide a service heart-beat.

The Adaptor section manages the selection of the underlying chat system to which to connect and the infrastructure DNS servers that define the chosen platform.

Server Name: Manually enter the FQDN of the Skype for Business front end or pool. Users can connect to multiple persistent chat pools. This allows users to join any chatrooms that are located on any of the specified persistent chat pools.

Preferences: Sets the file repository for saving local preferences.

Session timeout: This sets the timeout for MindLink Anywhere. The MindLink client will be set to an idle/away status after being disconnected from the network once the configured time has elapsed.

You can add debug keys (such as configuring Exchange Online or enabling pre-release features) and you can also override any other configuration value. Examples of a couple of custom setting keys include:

Notes when using custom settings:
- Custom key/value
- Invalid keys cause the host to crash

This section manages the MindLink Foundation API settings.
https://docs.mindlinksoft.com/docs/Install_And_Configure/Management_Center/API_Management_Center
2022-08-08T06:23:54
CC-MAIN-2022-33
1659882570767.11
[]
docs.mindlinksoft.com
Register a New Account

To register a new account at the Cebod Telecom portal follow these steps:

a. Visit and click on Register Now. Fill in the form with your name, email, password (remember the password), phone number, and company name. Once the form is filled, click on Register. A confirmation page appears.

b. You should have received an email in your inbox. If not, please check your spam folder. Please remember to allow (safelist) all emails from @cebodtelecom.com to prevent them from accidentally going into your spam/junk folder. This will ensure that you receive all the important communication from us in a timely manner. Below is a sample email notification.

c. Click on the link to confirm your email. Another confirmation page appears, informing you that your account is now active.

d. Log into your account with your email and the password you selected.
https://docs.cebodtelecom.com/overview/register-a-new-account
2022-08-08T06:34:16
CC-MAIN-2022-33
1659882570767.11
[]
docs.cebodtelecom.com
Hippo Hippo is an open source Platform as a Service (PaaS), making it easier to deploy and manage applications following modern cloud-native best practices. Hippo includes capabilities for building and deploying applications from source, simple application configuration, automatically deploying and rolling back releases, managing domain names, providing seamless edge routing, log aggregation, and sharing applications with other teams. All of this is exposed through a beautiful web interface and simple-to-use developer tooling. Under the hood, Hippo takes advantage of several modern cloud-native tools like WebAssembly to provide a safe, secure, sandboxed environment to compile, deploy, run, and manage applications. Our goal is to provide a platform for developers to take advantage of modern technologies without having to dive into the technical details of hosting. We are also focused on providing a platform for cloud engineers looking for a secure and safe runtime platform for their developers, with all the bells and whistles required to deploy applications with ease. Getting Started To get started with Hippo, follow our Quick Start Guide. Take a deep dive into Hippo in our Topic Guides. How-to Guides are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how Hippo works. The Developer Guides help you get started developing code for the Hippo project. Project Status Hippo is experimental code. It is not considered production-grade by its developers, nor is it “supported” software. However, it is ready for you to try out and provide feedback. About the Team DeisLabs is experimenting with many WebAssembly technologies right now. This is one of a multitude of projects (including Krustlet, the WebAssembly Kubelet) designed to test the limits of WebAssembly as a cloud-based runtime. Here at DeisLabs we are cooking up better ways to develop and run WebAssembly workloads. Not familiar with WebAssembly? Take a quick tour of WebAssembly in a Hurry to get up to speed.
https://docs.hippofactory.dev/
2022-08-08T07:40:44
CC-MAIN-2022-33
1659882570767.11
[]
docs.hippofactory.dev
NXP SJA1105 switch driver¶ Overview¶ The NXP SJA1105 is a family of 10 SPI-managed automotive switches:). Topology and loop detection through STP is supported. (‘flags. Routing actions (redirect, trap, drop)¶ The switch is able to offload flow-based redirection of packets to a set of destination ports specified by the user. Internally, this is implemented by making use of Virtual Links, a TTEthernet concept. The driver supports 2 types of keys for Virtual Links: VLAN-aware virtual links: these match on destination MAC address, VLAN ID and VLAN PCP. VLAN-unaware virtual links: these match on destination MAC address only. The VLAN awareness state of the bridge (vlan_filtering) cannot be changed while there are virtual link rules installed. Composing multiple actions inside the same rule is supported. When only routing actions are requested, the driver creates a “non-critical” virtual link. When the action list also contains tc-gate (more details below), the virtual link becomes “time-critical” (draws frame buffers from a reserved memory partition, etc). The 3 routing actions that are supported are “trap”, “drop” and “redirect”. Example 1: send frames received on swp2 with a DA of 42:be:24:9b:76:20 to the CPU and to swp3. This type of key (DA only) when the port’s VLAN awareness state is off: tc qdisc add dev swp2 clsact tc filter add dev swp2 ingress flower skip_sw dst_mac 42:be:24:9b:76:20 \ action mirred egress redirect dev swp3 \ action trap Example 2: drop frames received on swp2 with a DA of 42:be:24:9b:76:20, a VID of 100 and a PCP of 0: tc filter add dev swp2 ingress protocol 802.1Q flower skip_sw \ dst_mac 42:be:24:9b:76:20 vlan_id 100 vlan_prio 0 action drop Time-based ingress policing¶ The TTEthernet hardware abilities of the switch can be constrained to act similarly to the Per-Stream Filtering and Policing (PSFP) clause specified in IEEE 802.1Q-2018 (formerly 802.1Qci). This means it can be used to perform tight timing-based admission control for up to 1024 flows (identified by a tuple composed of destination MAC address, VLAN ID and VLAN PCP). Packets which are received outside their expected reception window are dropped. This capability can be managed through the offload of the tc-gate action. As routing actions are intrinsic to virtual links in TTEthernet (which performs explicit routing of time-critical traffic and does not leave that in the hands of the FDB, flooding etc), the tc-gate action may never appear alone when asking sja1105 to offload it. One (or more) redirect or trap actions must also follow along. Example: create a tc-taprio schedule that is phase-aligned with a tc-gate schedule (the clocks must be synchronized by a 1588 application stack, which is outside the scope of this document). No packet delivered by the sender will be dropped. Note that the reception window is larger than the transmission window (and much more so, in this example) to compensate for the packet propagation delay of the link (which can be determined by the 1588 application stack). Receiver (sja1105): tc qdisc add dev swp2 clsact now=$(phc_ctl /dev/ptp1 get | awk '/clock time is/ {print $5}') && \ sec=$(echo $now | awk -F. 
'{print $1}') && \ base_time="$(((sec + 2) * 1000000000))" && \ echo "base time ${base_time}" tc filter add dev swp2 ingress flower skip_sw \ dst_mac 42:be:24:9b:76:20 \ action gate base-time ${base_time} \ sched-entry OPEN 60000 -1 -1 \ sched-entry CLOSE 40000 -1 -1 \ action trap Sender: now=$(phc_ctl /dev/ptp0 get | awk '/clock time is/ {print $5}') && \ sec=$(echo $now | awk -F. '{print $1}') && \ base_time="$(((sec + 2) * 1000000000))" && \ echo "base time ${base_time}" tc qdisc add dev eno0 parent root taprio \ num_tc 8 \ map 0 1 2 3 4 5 6 7 \ queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \ base-time ${base_time} \ sched-entry S 01 50000 \ sched-entry S 00 50000 \ flags 2 The engine used to schedule the ingress gate operations is the same that the one used for the tc-taprio offload. Therefore, the restrictions regarding the fact that no two gate actions (either tc-gate or tc-taprio gates) may fire at the same time (during the same 200 ns slot) still apply. To come in handy, it is possible to share time-triggered virtual links across more than 1 ingress port, via flow blocks. In this case, the restriction of firing at the same time does not apply because there is a single schedule in the system, that of the shared virtual link: tc qdisc add dev swp2 ingress_block 1 clsact tc qdisc add dev swp3 ingress_block 1 clsact tc filter add block 1 flower skip_sw dst_mac 42:be:24:9b:76:20 \ action gate index 2 \ base-time 0 \ sched-entry OPEN 50000000 -1 -1 \ sched-entry CLOSE 50000000 -1 -1 \ action trap Hardware statistics for each flow are also available (“pkts” counts the number of dropped frames, which is a sum of frames dropped due to timing violations, lack of destination ports and MTU enforcement checks). Byte-level counters are not available. Limitations¶ The SJA1105 switch family always performs VLAN processing. When configured as VLAN-unaware, frames carry a different VLAN tag internally, depending on whether the port is standalone or under a VLAN-unaware bridge. The virtual link keys are always fixed at {MAC DA, VLAN ID, VLAN PCP}, but the driver asks for the VLAN ID and VLAN PCP when the port is under a VLAN-aware bridge. Otherwise, it fills in the VLAN ID and PCP automatically, based on whether the port is standalone or in a VLAN-unaware bridge, and accepts only “VLAN-unaware” tc-flower keys (MAC DA). The existing tc-flower keys that are offloaded using virtual links are no longer operational after one of the following happens: port was standalone and joins a bridge (VLAN-aware or VLAN-unaware) port is part of a bridge whose VLAN awareness state changes port was part of a bridge and becomes standalone port was standalone, but another port joins a VLAN-aware bridge and this changes the global VLAN awareness state of the bridge The driver cannot veto all these operations, and it cannot update/remove the existing tc-flower filters either. So for proper operation, the tc-flower filters should be installed only after the forwarding configuration of the port has been made, and removed by user space before making any changes to it. Device Tree bindings and board design¶ This section references Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml¶ The SJA1105 port compatibility matrix is: The SJA1110 port compatibility matrix is:
https://docs.kernel.org/networking/dsa/sja1105.html
2022-08-08T07:02:17
CC-MAIN-2022-33
1659882570767.11
[]
docs.kernel.org
What is new in SQL Server Analysis Services 2016 RC0 With the release of SQL Server 2016 RC0, we’ve added more capability for modeling and managing Tabular models set at the SQL Server 2016 compatibility level (1200). RC0 brings us one step closer to general release by adding display folders, full-fledged PowerShell support for Tabular 1200 models and instances, SSIS administration of SSAS Tabular workloads, and a new Tabular Object Model namespace in AMO. Support for Display folders in Analysis Services Tables can contain hundreds of individual columns or measures. With SQL Server 2016 RC0, modelers can organize them into user-defined folders to view and manage these attributes. These folders are called display folders and are also supported by Excel and Power BI Desktop. You can now set the display folder on any measure or column in SSDT, using the latest SQL Server Data Tools Preview in Visual Studio 2015 release that coincides with the RC0 drop. Display folders show up in Excel’s field list: And in the Power BI desktop field list, when you use the February update: For this RC, there is a known issue that the display folder won’t show up immediately in Excel. Please see the release notes for more details and a workaround. PowerShell support for SQL Server 2016 Tabular models With RC0, you can now use all the Analysis Services PowerShell cmdlets against a Tabular model set to the SQL Server 2016 compatibility level (1200). We also introduced two new cmdlets: Invoke-ProcessASDatabase and Invoke-ProcessTable . Let’s see how we can use these PowerShell improvements when connecting to a Tabular model. To start, I run PowerShell and then enter the following command that loads the SQL PowerShell environment: “Import-Module sqlps -DisableNameChecking” With this loaded, you can now connect to your SQL Server 2016 Analysis Services tabular instance. First, switch to the SQLAS provider: cd SQLAS\ And then connect to the server: cd [SERVERNAME] After connecting to the server, you can navigate Tabular objects. We will automatically detect if the Tabular model is at the 1200 compatibility level and then show a tabular representation: Tabular objects can now be used in the many cmdlets available for SSAS. You can find the complete list here in the documentation. SSIS support for SQL Server 2016 Tabular models With RC0 you can now use all the SSIS task and destinations against a SQL Server 2016 Tabular model. SSIS tasks have been updated to represent tabular objects instead of multidimensional objects. For example, with the latest tools, if you want to select objects to process, the Processing Task will automatically detect if the model is a Tabular 1200 compatibility model and give us Tabular objects rather than measuregroups and dimensions: The same goes for the Partition processing destination, as it now also shows Tabular objects and also supports pushing data into a partition. The Dimension processing destination will not work for Tabular models with the SQL 2016 compatibility level as the Processing task or Partition task are sufficient to schedule tabular processing. The Execute DDL task was already updated in CTP 3.3 to receive TMSL scripts. Tabular Object model for SQL Server 2016 Tabular models The new Tabular Object Model (TOM) is part of Analysis Management Objects (AMO), the complete library of programmatically accessed objects that enables an application to manage a running instance of Microsoft SQL Server Analysis Services. 
With the Tabular Object Model you can now use a concepts familiar to the Tabular developer instead of using Multidimensional concepts. This allows for simpler and more readable code when developing against a Tabular 1200 model. A high-level picture of the new TOM API looks like this: Here is a small code snippet that shows how to refresh a single table: public void RefreshTable() { var server = new Server(); server.Connect(ServerConnectionString); //Connect to the server. Database Db = server.Databases["TMDB"]; //Connect to the DB Model m = Db.Model; //Get the model m.Tables["Sales"].RequestRefresh (RefreshType.Full); //Mark the Sales table to be refreshed. m.SaveChanges(); //Commit the changes } You can find more information in the documentation. Content is mostly reference docs right now, with more details to come over the next several weeks. To download TOM you can download "SQL_AS_AMO.msi" from the SQL 2016 RC0 feature pack here. Download now! To get started, download SQL Server 2016 RC0. The corresponding tools, SSDT February 2016 for Visual Studio 2015 are also available for download. If you already have an earlier version of SSDT, you can install the latest version over it.
https://docs.microsoft.com/en-us/archive/blogs/analysisservices/what-is-new-in-sql-server-analysis-services-2016-rc0
2022-08-08T06:41:27
CC-MAIN-2022-33
1659882570767.11
[array(['https://msdntnarchive.blob.core.windows.net/media/2016/03/image001.png', 'image001'], dtype=object) ]
docs.microsoft.com
receipt other than check, or Apple might reject your submission.
https://docs.unity3d.com/2021.1/Documentation/ScriptReference/PlayerSettings-useMacAppStoreValidation.html
2022-08-08T07:12:33
CC-MAIN-2022-33
1659882570767.11
[]
docs.unity3d.com
Apache Overview Alliance Auth gets served using a Web Server Gateway Interface (WSGI) script. This script passes web requests to Alliance Auth which generates the content to be displayed and returns it. This means very little has to be configured in Apache to host Alliance Auth. If you’re using a small VPS to host services with very limited memory, consider using NGINX. Installation Ubuntu 1804, 2004: apt-get install apache2 CentOS 7: yum install httpd Centos Stream 8, Stream 9 dnf install httpd CentOS 7, Stream 8, Stream 9 systemctl enable httpd systemctl start httpd Configuration Apache needs to be able to read the folder containing your auth project’s static files. Ubuntu 1804, 2004: chown -R www-data:www-data /var/www/myauth/static CentOS 7, Stream 8, Stream 9 chown -R apache:apache /var/www/myauth/static Apache serves sites through defined virtual hosts. These are located in /etc/apache2/sites-available/ on Ubuntu and /etc/httpd/conf.d/httpd.conf on CentOS. A virtual host for auth need only proxy requests to your WSGI server (Gunicorn if you followed the install guide) and serve static files. Examples can be found below. Create your config in its own file e.g. myauth.conf Ubuntu To proxy and modify headers a few mods need to be enabled. a2enmod proxy a2enmod proxy_http a2enmod headers Create a new config file for auth e.g. /etc/apache2/sites-available/myauth.conf and fill out the virtual host configuration. To enable your config use a2ensite myauth.conf and then reload apache with service apache2 reload. Warning - In some scenarios, the Apache default page is still enabled. To disable it use:: a2dissite 000-default.conf Sample Config File <VirtualHost *:80> ServerName auth.example.com ProxyPassMatch ^/static ! ProxyPassMatch ^/robots.txt ! ProxyPass / ProxyPassReverse / ProxyPreserveHost On Alias "/static" "/var/www/myauth/static" Alias "/robots.txt" "/var/www/myauth/static/robots.txt" <Directory "/var/www/myauth/static"> Require all granted </Directory> <Location "/robots.txt"> SetHandler None Require all granted </Location> </VirtualHost> SSL It’s 2018 - there’s no reason to run a site without SSL. The EFF provides free, renewable SSL certificates with an automated installer. Visit their website for information. After acquiring SSL the config file needs to be adjusted. Add the following lines inside the <VirtualHost> block: RequestHeader set X-FORWARDED-PROTOCOL https RequestHeader set X-FORWARDED-SSL On Known Issues Apache2 vs. Django For some versions of Apache2 you might have to tell the Django framework explicitly to use SSL, since the automatic detection doesn’t work. SSL in general will work, but internally created URLs by Django might still be prefixed with just http:// instead of https://, so it can’t hurt to add these lines to myauth/myauth/settings/local.py. # Setup support for proxy headers USE_X_FORWARDED_HOST = True SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTOCOL", "https")
https://allianceauth.readthedocs.io/en/latest/installation/apache.html
2022-08-08T07:49:48
CC-MAIN-2022-33
1659882570767.11
[]
allianceauth.readthedocs.io
Chrononaut's API¶

Core library classes¶

class chrononaut.Versioned¶

A mixin for use with Flask-SQLAlchemy declarative models. To get started, simply add the Versioned mixin to one of your models:

class User(db.Model, Versioned):
    __tablename__ = 'appuser'
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(255))
    ...

The above will then automatically track updates to the User model and create an appuser_history table for tracking prior versions of each record. By default, all columns are tracked. By default, change information includes a user_id and remote_addr, which are set to automatically populate from Flask-Login's current_user in the _capture_change_info() method. Subclass Versioned and override a combination of _capture_change_info(), _fetch_current_user_id(), and _get_custom_change_info(). This change_info is stored in a JSON column in your application's database and has the following rough layout:

{
    "user_id": "A unique user ID (string) or None",
    "remote_addr": "The user IP (string) or None",
    "extra": {
        ...  # Optional extra fields
    },
    "hidden_cols_changed": [
        ...  # A list of any hidden fields changed in the version
    ]
}

Note that the latter two keys will not exist if they would otherwise be empty. You may provide a list of column names that you do not want to track using the optional __chrononaut_untracked__ field or you may provide a list of columns you'd like to "hide" (i.e., track updates to the columns but not their values) using the __chrononaut_hidden__ field. This can be useful for sensitive values, e.g., passwords, which you do not want to retain indefinitely.

diff(from_model, to=None, include_hidden=False)¶

Enumerate the changes from a prior history model to a later history model or the current model's state (if to is None).

class chrononaut.VersionedSQLAlchemy(app=None, use_native_unicode=True, session_options=None, metadata=None, query_class=<class 'flask_sqlalchemy.BaseQuery'>, model_class=<class 'flask_sqlalchemy.model.Model'>, engine_options=None)¶

A subclass of the SQLAlchemy used to control a SQLAlchemy integration to a Flask application. Two usage modes are supported (as in Flask-SQLAlchemy). One is directly binding to a Flask application:

app = Flask(__name__)
db = VersionedSQLAlchemy(app)

The other is by creating the db object and then later initializing it for the application:

db = VersionedSQLAlchemy()

# Later/elsewhere
def configure_app():
    app = Flask(__name__)
    db.init_app(app)
    return app

At its core, the VersionedSQLAlchemy class simply ensures that database session objects properly listen to events and create version records for models with the Versioned mixin.

Helper functions¶

chrononaut.extra_change_info(*args, **kwds)¶

A context manager for appending extra change_info into Chrononaut history records for Versioned models. Supports appending changes to multiple individual objects of the same or varied classes. Usage:

with extra_change_info(change_rationale='User request'):
    user.email = '[email protected]'
    letter.subject = 'Welcome New User!'
    db.session.commit()

Note that the db.session.commit() change needs to occur within the context manager block for additional fields to get injected into the history table change_info JSON within an extra info field. Any number of keyword arguments with string values are supported. The above example yields a change_info like the following:

{
    "user_id": "[email protected]",
    "remote_addr": "127.0.0.1",
    "extra": {
        "change_rationale": "User request"
    }
}
chrononaut.append_change_info(*args, **kwds)¶

A context manager for appending extra change info directly onto a single model instance. Use extra_change_info() for tracking multiple objects of the same or different classes. Usage:

with append_change_info(user, change_rationale='User request'):
    user.email = '[email protected]'
    db.session.commit()

Note that db.session.commit() does not need to occur within the context manager block for additional fields to be appended. Changes take the same form as with extra_change_info().

chrononaut.rationale(*args, **kwds)¶

A simplified version of the extra_change_info() context manager that accepts only a rationale string and stores it in the extra change info. Usage:

with rationale('Updating per user request, see GH #1732'):
    user.email = '[email protected]'
    db.session.commit()

This would yield a change_info like the following:

{
    "user_id": "[email protected]",
    "remote_addr": "127.0.0.1",
    "extra": {
        "rationale": "Updating per user request, see GH #1732"
    }
}
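Putting the pieces above together, here is a short, hedged sketch of a versioned model and a change recorded with extra change info. The database URI and the model's columns are illustrative assumptions; only the Versioned mixin, VersionedSQLAlchemy, __chrononaut_hidden__ and extra_change_info usage come from the reference above.

# Minimal sketch combining the documented pieces; field names and DSN are illustrative.
from flask import Flask
from chrononaut import Versioned, VersionedSQLAlchemy, extra_change_info

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://localhost/example"  # assumed DSN
db = VersionedSQLAlchemy(app)

class User(db.Model, Versioned):
    __tablename__ = "appuser"
    __chrononaut_hidden__ = ["password"]   # track that it changed, but not the value
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(255))
    password = db.Column(db.String(255))

# Later, inside a request or a script:
with app.app_context():
    user = User.query.first()
    with extra_change_info(change_rationale="User request"):
        user.email = "[email protected]"
        db.session.commit()   # must happen inside the block for the extra info to be captured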
https://chrononaut.readthedocs.io/en/latest/basics.html
2022-08-08T07:11:44
CC-MAIN-2022-33
1659882570767.11
[]
chrononaut.readthedocs.io
public interface Index

String getAttributeName()

boolean isOrdered()
Ordered indexes support the fast evaluation of range queries. Unordered indexes are still capable of executing range queries, but the performance would be about the same as the full scan performance.
Returns: true if this index is ordered, false otherwise.
See also: getSubRecords(com.hazelcast.query.impl.ComparisonType, java.lang.Comparable), getSubRecordsBetween(java.lang.Comparable, java.lang.Comparable)

void saveEntryIndex(QueryableEntry entry, Object oldValue)
Parameters: entry - the entry to save. oldValue - the previous old value associated with the entry or null if the entry is new.
Throws: QueryException - if there were errors while extracting the attribute value from the entry.

void removeEntryIndex(Data key, Object value)
Parameters: key - the key of the entry to remove. value - the value of the entry to remove.
Throws: QueryException - if there were errors while extracting the attribute value from the entry.

TypeConverter getConverter()
Returns: null if the converter is not known because there were no saves to this index and the attribute type is not inferred yet.

Set<QueryableEntry> getRecords(Comparable value)
Parameters: value - the value to compare against.

Set<QueryableEntry> getRecords(Comparable[] values)
Parameters: values - the values to compare against.

Set<QueryableEntry> getSubRecordsBetween(Comparable from, Comparable to)
More precisely, this method produces a result set containing entries whose attribute values are greater than or equal to the given from value and less than or equal to the given to value.
Parameters: from - the beginning of the range (inclusive). to - the end of the range (inclusive).

Set<QueryableEntry> getSubRecords(ComparisonType comparisonType, Comparable searchedValue)
Parameters: comparisonType - the type of the comparison to perform. searchedValue - the value to compare against.

void clear()

void destroy()
https://docs.hazelcast.org/docs/3.10/javadoc/com/hazelcast/query/impl/Index.html
2022-08-08T07:34:00
CC-MAIN-2022-33
1659882570767.11
[]
docs.hazelcast.org
Set-SPContentDatabase

Sets global properties of a SharePoint content database.

Syntax

Set-SPContentDatabase [...]

Specifies the mirror server for failover.

Specifies the content database to update.

Specifies the maximum number of site collections that this database can host. The type must be a positive integer. Set to $null to clear this value.

Specifies the status of the SQL Server database. Set this parameter to Online to make the database available to host new sites. Set this parameter to Disabled to make the database unavailable to host new sites. The type must be either of the following: Online or Disabled.
https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/Set-SPContentDatabase?redirectedfrom=MSDN&view=sharepoint-server-ps
2022-08-08T09:01:37
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
Next: System Utilities, Previous: Object Oriented Programming , toolbars, context menus, pushbuttons, sliders, etc..
https://docs.octave.org/interpreter/GUI-Development.html
2022-08-08T07:32:49
CC-MAIN-2022-33
1659882570767.11
[]
docs.octave.org
Contacts API - Recipients Elements that can be shared among more than one endpoint definition. Search recipients POST /v3/contactdb/recipients/search Base url: Search using segment conditions without actually creating a segment. Body contains a JSON object with conditions, a list of conditions as described below, and an optional list_id, which is a valid list ID for a list to limit the search on. Valid operators for create and update depend on the type of the field for which you are searching. - Dates: - "eq", "ne", "lt" (before), "gt" (after) - You may use MM/DD/YYYY for day granularity or an epoch for second granularity. - "empty", "not_empty" - "is within" - You may use an ISO 8601 date format or the # of days. - Text: "contains", "eq" (is - matches the full field), "ne" (is not - matches any field where the entire field is not the condition value), "empty", "not_empty" - Numbers: "eq", "lt", "gt", "empty", "not_empty" Field values must all be a string. Search conditions using "eq" or "ne" for email clicks and opens should provide a "field" of either clicks.campaign_identifier or opens.campaign_identifier. The condition value should be a string containing the id of a completed campaign. Search conditions list may contain multiple conditions, joined by an "and" or "or" in the "and_or" field. The first condition in the conditions list must have an empty "and_or", and subsequent conditions must all specify an "and_or". Authentication - API Key Headers Request Body The conditions by which this segment should be created. { "list_id": -27497588, "conditions": [ { "and_or": "", "field": "birthday", "value": "01/12/1985", "operator": "eq" }, { "and_or": "", "field": "birthday", "value": "01/12/1985", "operator": "eq" }, { "and_or": "", "field": "birthday", "value": "01/12/1985", "operator": "eq" }, { "and_or": "", "field": "birthday", "value": "01/12/1985", "operator": "eq" } ] }.
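As an illustration of calling this endpoint, here is a hedged Python sketch that posts a conditions body with the requests library. The endpoint path and payload shape are taken from the reference above; the standard api.sendgrid.com base URL, the environment variable holding the API key, and the birthday condition itself are assumptions for the example.

# Hedged sketch: search contactdb recipients using segment-style conditions.
# Assumes a SendGrid API key is available in the SENDGRID_API_KEY environment variable.
import os
import requests

payload = {
    "conditions": [
        {"and_or": "", "field": "birthday", "value": "01/12/1985", "operator": "eq"}
    ]
}

response = requests.post(
    "https://api.sendgrid.com/v3/contactdb/recipients/search",
    headers={
        "Authorization": f"Bearer {os.environ['SENDGRID_API_KEY']}",
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json())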
https://docs.sendgrid.com/api-reference/contacts-api-recipients/search-recipients?utm_source=docs&utm_medium=social&utm_campaign=guides_tags
2022-08-08T07:07:15
CC-MAIN-2022-33
1659882570767.11
[]
docs.sendgrid.com
SendGrid has helped thousands of customers send their email messages since 2009. We help our customers build their email content, send their messages, and view the success of each campaign sent. We also realize that the actual journey of an email message sent to an inbox is complicated, and sometimes this process may not be fully understood by all senders. This post shows the basics of the email path, along with where SendGrid helps to make that journey less complicated.

In the email flowchart below, you can see the main components that all email messages pass through. Granted, there are many other finer details involved within each step, but for the sake of this post, we're keeping it to the basics.

First, a sender puts together the content that their recipients will love. Then it's time for the "SMTP conversation" to take place. SMTP stands for Simple Mail Transfer Protocol, and this conversation is what makes email messages get from the sender to the recipient. It's easiest to think of an SMTP conversation as a "handshake". Imagine that the sender is a host at a party and all of the other guests are the recipients of the message. The host will shake every guest's hand, and during that "handshake" they will have this SMTP conversation. In the end, the guest (i.e. the recipient and its recipient server) will determine whether or not to accept the message. In this scenario, you can think of SendGrid as a person at the party grabbing both the host's and the guest's hands and making the handshake and discussion actually happen.

The "Handshake" Details and Results

The sender connects to the SMTP server through SendGrid and tells the server the final destination it would like its message to go to. Let's say it is "[email protected]". The SMTP server recognizes the domain portion (the part after the @ sign) of the address and uses it to find the receiving server that accepts mail for that domain.

The 2 places SendGrid assists in the message path (along with the Outbound Mail Server) are the DNS (Domain Name System) and Authentication portions. The receiving server wants to trust the mail that is being exchanged in order to accept it. DNS and Authentication assist with this decision.

DNS

DNS stands for "Domain Name System" and it is thought of as the "phone book for the Internet". It houses many pieces of information for the sending domain of a message. The receiving server checks this "phone book" to see if it can determine who the sender is and if they are trusted.

Authentication

The receiving server will check:
- How recipients have engaged with previous mail from the sender (whether or not it was marked as spam).
- Where the receiving server previously decided to place any mail from the same IP and domain.

The reputation of the domains included in the links within the body content will also factor into delivery.

User Level Filtering

Along with the items listed above, some recipients may also have their own individual rules within their inbox for where certain mail will go. This placement is harder to change, aside from making sure that your content is desired by the recipient so they won't be creating any custom filters to have your messages delivered anywhere but the inbox.

Reacting to Opinions of the "Guests"

Feedback for the Guests to Give the Host

Within email, there is a function known as a Feedback Loop. Feedback Loops are created by the mailbox providers, and a sender can get set up to receive notifications through them to be informed when a recipient complains about the sender's message (aka marking a message as junk or spam).
This should help the host (sender) be aware of when certain guests didn't want the content included in their interaction. The host (sender) should not try to have another conversation with (aka send messages to) these guests in the future.
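To make the "handshake" described above more concrete, here is a rough, illustrative sketch of a minimal SMTP conversation (the hostnames and addresses are made up, and real sessions include additional steps such as STARTTLS and authentication):

S: 220 mx.example.com ESMTP ready
C: EHLO mail.sendersite.com
S: 250 mx.example.com greets mail.sendersite.com
C: MAIL FROM:<[email protected]>
S: 250 OK
C: RCPT TO:<[email protected]>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Subject: Hello
C:
C: This is the message body.
C: .
S: 250 OK: queued
C: QUIT
S: 221 Bye

If the receiving server does not trust the sender (for example, because of the DNS and authentication checks described earlier), it can respond to any of these steps with a rejection code instead of 250.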
https://docs.sendgrid.com/ui/sending-email/email-flow?utm_source=docs&utm_medium=social&utm_campaign=guides_tags
2022-08-08T07:00:44
CC-MAIN-2022-33
1659882570767.11
[array(['https://twilio-cms-prod.s3.amazonaws.com/original_images/MailFlow.png', 'Email Flow'], dtype=object) ]
docs.sendgrid.com
public class ReadFailureException extends QueryConsistencyException

This happens when some of the replicas that were contacted by the coordinator replied with an error.

ReadFailureException(@NonNull Node coordinator, @NonNull ConsistencyLevel consistencyLevel, int received, int blockFor, int numFailures, boolean dataPresent, @NonNull Map<InetAddress,Integer> reasonMap)

public int getNumFailures()

public boolean wasDataPresent()
During reads, Cassandra doesn't request data from every replica to minimize internal network traffic. Instead, some replicas are only asked for a checksum of the data. A read failure may occur even if enough replicas have responded to fulfill the consistency level, if only checksum responses have been received. This method allows you to detect that case.

@NonNull public Map<InetAddress,Integer> getReasonMap()
At the time of writing, the existing reason codes are:
- 0x0000: the error does not have a specific code assigned yet, or the cause is unknown.
- 0x0001: the read operation scanned too many tombstones (as defined by tombstone_failure_threshold in cassandra.yaml), causing a TombstoneOverwhelmingException.
This feature is available for protocol v5 or above only. With lower protocol versions, the map will always be empty.
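As an illustrative sketch (not part of the Javadoc above), application code using the driver can catch and inspect this exception roughly like this; the keyspace, table, and session setup are placeholders:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.datastax.oss.driver.api.core.servererrors.ReadFailureException;

public class ReadFailureExample {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            try {
                session.execute(
                    SimpleStatement.newInstance("SELECT * FROM my_ks.users WHERE id = ?", 42));
            } catch (ReadFailureException e) {
                // How many replicas reported a failure, and why (per replica address).
                System.err.println("Read failed on " + e.getNumFailures() + " replica(s)");
                e.getReasonMap().forEach((replica, code) ->
                    System.err.println("  " + replica + " -> reason code 0x" + Integer.toHexString(code)));
            }
        }
    }
}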
https://java-driver.docs.scylladb.com/scylla-4.13.0.x/api/com/datastax/oss/driver/api/core/servererrors/ReadFailureException.html
2022-08-08T06:45:59
CC-MAIN-2022-33
1659882570767.11
[]
java-driver.docs.scylladb.com
About Phoenix for NAS shares Phoenix offers you a viable, cost-effective way of storing, managing, archiving, and recovering your data on NAS devices. The key highlights of the Phoenix support for NAS devices are as follows: - Scale-out NAS agent that allows parallel and performant backups for large NAS deployments. Smart scan approach, including machine learning through metadata analysis that identifies data that has changed for backup versus repeatedly backing up all data regardless if it has changed or not. By focusing on backing up only the new or changed data, backup cycles are dramatically shortened. Native data format and vendor agnostic approach that enables device migration and data intelligence use cases. Phoenix components The Phoenix components of NAS shares are: - Phoenix Cloud: This is the server component of Phoenix that authenticates and authorizes incoming backup and restore requests from the NAS proxy. The NAS proxy is installed on a separate Windows or Linux server and redirects the requests to the Phoenix storage. - Phoenix Management Console: The Phoenix Management Console is a web-based, unified console that provides complete visibility and understanding of the health status of the NAS devices and its shares that you manage, wherever those NAS devices reside. You can globally view all of the NAS devices located in your storage infrastructure and configure NAS shares for backup, recovery, and archival of the data. The console provides Phoenix administrators with an ability to: - Register and configure NAS devices and shares for backup in the server infrastructure of the organization. - Control Phoenix activities by defining backup content, backup policy, retention period, and more. - Monitor backup and recovery jobs, activities, and reports. - NAS proxy: The NAS proxy is the Phoenix Agent installed on a Windows or Linux server, which handles the backup and restore requests from the NAS shares. You need to install and activate the NAS proxy to establish its connectivity with Phoenix. The link between Phoenix and a NAS device is established when you map an activated NAS proxy to a NAS device. You can map multiple proxies to a device and can also attach it to multiple backup sets. - Phoenix CloudCache: The Phoenix CloudCache is a dedicated server that temporarily stores backup and restore data from the NAS shares. At periodic intervals, Phoenix CloudCache synchronizes this data to the Phoenix Cloud and reduces the bandwidth consumption within your infrastructure. Backup capabilities Phoenix provides the following backup capabilities for NAS shares: Manual backup During manual backups, Phoenix only backs up data as defined in the backup policy. The backup window defined in the backup policy does not restrict a manual backup. Instead, it continues until all the data is backed up. In the event of a network connection failure during backup, the NAS proxy attempts to connect to the Phoenix Cloud. After the connectivity is restored, backup resumes from the state in which it was interrupted. Network bandwidth does not restrict a manual backup, as it uses the maximum bandwidth available. Note that the Automatic Retry feature does not work with manual backups. Back up empty folders Phoenix can back up and restore empty folders. On SMB shares, the USN journal must be enabled to back up and restore an empty folder. Smart scan The Smart scan feature optimizes scan duration at the time of executing a backup job. 
Phoenix can save a lot of time if certain files and folders, or Access Control Lists, are not scanned at the time of backup. The Smart scan feature provides options that let you choose what should be skipped when Phoenix scans files and folders for backup. After you enable Smart scan: - You can choose to skip Access Control Lists (ACLs) from getting scanned. - You can choose not to scan folders that have not been modified for a specified period, for example, three months. By default, the Smart scan option is disabled. You can enable this option while creating or editing a backup policy. For more information, see: Manage backup policies. Note: - The Skip ACL scan for unmodified files option is not applicable for Linux servers. - Smart Scan is supported only for SMB shares. - The first backup after a change in the content specified in the backup policy will be a full scan. The subsequent scans will be as per configuration. File types supported for backup While configuring a new backup set, you can configure the Exclude file types and Include file types options. The file types that are excluded by default are video files, audio files, executables, and image files. The file types that are included by default are Office files, PDF files, and HTML files. You can change the default selection of the file types. The following table lists the file types that you can include and exclude from the backup policy. Folders excluded from NAS share backup By default, Phoenix excludes certain folders from the NAS share backup, since both Windows and Linux operating systems contain system-specific files, which can be excluded from backup. For example, the Recycle Bin on the Windows server contains deleted files and folders, which won't require backup. The following table lists the default folders that Phoenix excludes from the backup.
https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/040_Back_up_and_restore_NAS_shares/010_Introduction_to_Phoenix_for_NAS_shares/10About_Phoenix_for_NAS_devices
2019-08-17T11:28:09
CC-MAIN-2019-35
1566027312128.3
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/cross.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) ]
docs.druva.com
Streaming in Mule Apps

Mule 4 introduces a new framework to work with streamed data. To understand the changes introduced in Mule 4, it is necessary to understand how traditional data streams are consumed:

Data streams cannot be consumed more than once. For example, consider the following flow: In a Mule 3 app, this flow results in writing the first file correctly, while the second is created with empty content. This happens because each component that consumes a stream expects to receive a new stream. After the stream is consumed by the first Write operation, the second Write operation receives an empty stream, so it has no content to write to a file. Something similar happened when trying to log the payload before and after a DataWeave transformation. Consider this example: This app logs the payload before the Transform Message processor, but does not log the resulting payload after this because the Logger consumes the stream, loading it into memory. When the stream gets to the Transform Message processor, the stream content is available in memory, so it is possible for the Transform Message processor to consume it. However, after this, the second Logger receives an empty stream.

Data streams cannot be consumed at the same time. Consider a Mule 3 app that uses a Scatter-Gather router to split a data stream and simultaneously log and write the payload to a file. This app fails because your streamed content cannot be processed by different processor chains simultaneously.

Repeatable Streams

Mule 4.0 introduces Repeatable Streams as its default framework for handling streams. Repeatable Streams enable you to:
- Read a stream more than once.
- Have concurrent access to the stream.

As a component consumes the stream, Mule saves its content into a temporary buffer. The runtime then feeds the component from the temporary buffer, ensuring that each component receives the full stream, regardless of how much of the stream was already consumed by any prior component. This happens automatically and requires no special configurations from your end, which saves you the trouble of finding workarounds to save the stream somewhere so you can access it again. This configuration automatically fixes the first two Mule 3 examples outlined above.

All repeatable streams support parallel access. This means that you don't need to worry about whether two components are trying to read the same stream when each component is running on a different thread. Mule automatically makes sure that when component A reads the stream it doesn't generate any side effects in component B. This enables you to perform tasks like the one described in the third example above.

You can configure how Mule handles the repeatable stream by using streaming strategies.

Streaming Strategies

File Stored Repeatable Stream

This is the default streaming strategy in Mule Runtime Enterprise Edition. It initially uses an in-memory buffer size of 512 KB. If the stream is larger than that, it creates a temporary file on your disk to store the contents without overflowing your memory. When you know you need to deal with large or small files, you can change the buffer size to optimize performance. Configuring a bigger buffer size increases performance by reducing the number of times the runtime needs to write the buffer to your disk, but it also limits the number of concurrent requests your application can process. In the same way, configuring a smaller buffer size saves memory load.
You can even set the buffer's unit of measurement, so you don't have to go through unit conversions. For example, if you know that you are going to read a file that's always around 1 MB in size, you can configure a 1 MB buffer:

<file:read path="...">
  <repeatable-file-store-stream inMemorySize="1" bufferUnit="MB"/>
</file:read>

Or if you know you are always processing a file no bigger than 10 KB, you can save memory:

<file:read path="...">
  <repeatable-file-store-stream inMemorySize="10" bufferUnit="KB"/>
</file:read>

Based on performance tests, the default 512 KB buffer size configuration of this strategy does not significantly impact performance in most scenarios. You need to run tests and find the proper buffer size configuration that fits your needs.

In Memory Repeatable Stream

This configuration is the default for Mule Runtime Community Edition. It uses a default configured buffer size of 512 KB. If the stream is larger than that, the buffer is expanded by a default increment size of 512 KB until it reaches the configured maximum buffer size. If the stream exceeds this limit, the application fails. You can customize this behavior by setting the initial size of the buffer, the rate at which the buffer increases, the maximum buffer size, and the measurement unit. For example, these settings configure an in-memory repeatable stream with a 512 KB initial size, which grows at a rate of 256 KB and allows up to 2 MB of content in memory:

<file:read path="...">
  <repeatable-in-memory-stream initialBufferSize="512" bufferSizeIncrement="256" maxBufferSize="2000" bufferUnit="KB"/>
</file:read>

Based on performance tests, the default 512 KB buffer size and 512 KB increment size configuration of this strategy does not significantly impact performance in most scenarios. You need to run tests and find the proper buffer size and size increment configuration that fits your needs.

Non Repeatable Stream

This strategy disables repeatable streams. It allows you to read an input stream only once, in case your use case does not really require the extra memory or performance overhead that comes with repeatable streams. Since the stream is not being saved to memory, this is the most performant strategy that you can use.

<file:read path="...">
  <non-repeatable-stream/>
</file:read>

Having this kind of configuration allows your flows to fail promptly if there's a component in the configuration that is trying to access a big input stream before the actual streaming component is executed.

Every component in Mule 4.0 that returns an InputStream or a Streamable collection supports repeatable streams. Some of these components are:
- File Connector
- FTP Connector
- Database Connector
- HTTP Connector
- Sockets
- Salesforce Connector

Streaming Objects

A similar scenario happens when an Anypoint Connector is configured to use auto-paging. Mule 4.0 automatically handles the paged output of the connector using Repeatable Auto Paging. This framework is similar to repeatable streams: as the connector receives the object, Mule sets a configurable in-memory buffer to save the object. However, while repeatable streams measure the buffer size in byte measurements, when handling objects the runtime measures the buffer size using instance counts. When calculating the in-memory buffer size for repeatable auto-paging, you need to estimate how much memory space each instance takes to avoid running out of memory. As with repeatable streams, you can use different strategies to configure how Mule handles the repeatable auto paging:

Repeatable File Store Iterable

This configuration is the default for Mule Runtime Enterprise Edition. It uses a default configured in-memory buffer of 500 objects.
If your query returns more results than the buffer size, Mule serializes those objects and writes them to your disk. You can configure the number of objects Mule stores in the in-memory buffer. The more objects you save in memory, the better performance you get by avoiding writes to disk. For example, you can set a buffer size of 100 objects in memory for a query from the Salesforce Connector:

<sfdc:query query="...">
  <ee:repeatable-file-store-iterable inMemoryObjects="100"/>
</sfdc:query>

This interface uses the Kryo framework to serialize objects so it can write them to your disk. Plain old Java serialization fails if the object does not implement the Serializable interface. However, if the object contains another object that doesn't implement the Serializable interface, Kryo is likely (but not guaranteed) to succeed. For example, a POJO containing an org.apache.xerces.jaxp.datatype.XMLGregorianCalendarImpl. Although the Kryo serializer allows Mule to serialize objects that the JVM cannot serialize by default, some things can't be serialized. It's recommended to keep your objects simple.
https://docs.mulesoft.com/mule-runtime/4.1/streaming-about
2019-08-17T11:09:55
CC-MAIN-2019-35
1566027312128.3
[]
docs.mulesoft.com
Version control integration

Note: Currently only integration with git and mercurial is supported.

Git integration

The registered diff commands accept the same arguments as git diff: [<commit> [<commit>]] [<path>]. See the git diff documentation for further explanation of "<commit>" and "<path>" for this command. Instead, call nbdiff-web in the same way that you call git diff, e.g. git diff [<commit> [<commit>]] [path].

Mercurial integration

Integration with mercurial is similar to the manual git registration, but it uses a separate set of entry points since, amongst other things, mercurial requires the diff extension to handle directories.

Differs

To tell mercurial about nbdime's differs, open the appropriate config file (hg config --edit for the default user-level one), and add the following entries:

[extensions]
extdiff =

[extdiff]
cmd.nbdiff = hg-nbdiff
cmd.nbdiffweb = hg-nbdiffweb
opts.nbdiffweb = --log-level ERROR

This will:
- enable the external diff extension
- register both the command line diff and web diff
- set the default log level of the webdiff

opts.<cmdname> allows you to customize which flags nbdime is called with. To use nbdime from mercurial, you can then call it like this:

hg nbdiff <same arguments as for 'hg diff'>
hg nbdiffweb <same arguments as for 'hg diff'>

Mergetools

Add the following entries to the appropriate mercurial config file:

[merge-tools]
nbdime.priority = 2
nbdime.premerge = False
nbdime.executable = hg-nbmerge
nbdime.args = $base $local $other $output
nbdimeweb.priority = 1
nbdimeweb.premerge = False
nbdimeweb.executable = hg-nbmergeweb
nbdimeweb.args = --log-level ERROR $base $local $other $output
nbdimeweb.gui = True

[merge-patterns]
**.ipynb = nbdime

This will:
- use the merge driver by default for notebook files
- register the web tool

The typical usage pattern for the webtool is like this:

> hg merge <other branch>
merging ***.ipynb
0 files updated, 0 files merged, 0 files removed, 1 files unresolved
use 'hg resolve' to retry unresolved file merges or 'hg update -C .' to abandon
> hg resolve --tool nbdimeweb
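For the git side, a minimal sketch of how nbdime is typically wired up from the command line (assuming nbdime's command-line entry points are on your PATH; the commit refs and notebook name are placeholders):

# Register nbdime's diff/merge drivers and tools with git for the current user.
nbdime config-git --enable --global

# Then use git as usual; notebook diffs go through nbdime:
git diff HEAD~1 -- analysis.ipynb

# Or open the rich web diff directly, with the same arguments as git diff:
nbdiff-web HEAD~1 HEAD analysis.ipynb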
https://nbdime.readthedocs.io/en/stable/vcs.html
2019-08-17T10:52:47
CC-MAIN-2019-35
1566027312128.3
[array(['_images/nbdiff-web.png', "example of nbdime's content-aware diff"], dtype=object) array(['_images/nbmerge-web.png', "nbdime's merge with web-based GUI viewer"], dtype=object)]
nbdime.readthedocs.io
Gets a command to add a caption (numbered label) to an equation. readonly insertEquationsCaption: InsertEquationsCaptionCommand You can invoke this command by calling the execute method. The execute method checks the command state (obtained using the getState method) before executing, and decides whether the action can be performed. The execute and getState methods are members of the InsertEquationsCaptionCommand class. This command adds the "Equation {SEQ Equation }" text at the current position in the document. Usage example:
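As a rough sketch (this is not the page's original usage example; it assumes a client-side RichEdit instance is available as richEdit), invoking the command could look like this:

// Sketch only: richEdit is assumed to be the client-side rich edit control.
var command = richEdit.commands.insertEquationsCaption;

// execute() checks the command state via getState() and only performs the
// action when it is allowed, inserting "Equation {SEQ Equation }" at the cursor.
command.execute();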
https://docs.devexpress.com/AspNet/js-RichEditCommands.insertEquationsCaption
2019-08-17T10:47:44
CC-MAIN-2019-35
1566027312128.3
[]
docs.devexpress.com
To make sure the correct address is being used for a shipment, the suggest-address endpoint allows you to verify and possibly suggest the address of a shipment. This endpoint differs a bit from the other endpoints, since it is an RPC endpoint. For now, the only supported country to suggest addresses for is the Netherlands (NL). The attributes that are required for an address suggestion differ per country. The endpoint will respond with a 422 Unprocessable Entity when not all required attributes are supplied for the country and will list all required attributes. Required Scope: addresses.suggest The following table displays required attributes and their types for postal code-based address suggestion: POST /suggest-address HTTP/1.1 Example: { "data": { "country_code": "NL", "postal_code": "2131 BC", "street_number": 679, "street_number_suffix": "A1" } } The API will try to suggest all possible matches using the data provided. { "data": [ { "country_code": "NL", "postal_code": "2131 BC", "street_number": 679, "street_number_suffix": "A1", "city": "Hoofddorp", "street_1": "Hoofdweg", "street_2": "Haarlemmermeer" } ] } When no address can be found to suggest, the data array will be empty.
https://docs.myparcel.com/api/rpc-endpoints/suggest-address/
2019-08-17T11:37:19
CC-MAIN-2019-35
1566027312128.3
[]
docs.myparcel.com
The AWS Service Broker provides access to Amazon Web Services (AWS) through the OKD service catalog. AWS services and componenets can be configured and viewed in both the OKD web console and the AWS dashboards. For information on installing the AWS Service Broker, see the AWS Service Broker Documentation in the Amazon Web Services - Labs docs repository.
https://docs.okd.io/3.11/architecture/service_catalog/aws_broker.html
2019-08-17T10:43:07
CC-MAIN-2019-35
1566027312128.3
[]
docs.okd.io
You are viewing the RapidMiner Server documentation for version 8.1 - Check here for latest version What's New in RapidMiner Server 8.1.1? Released: April 10th, 2018 The following describes the enhancements and bug fixes in RapidMiner Server 8.1.1: New Features - Added two new execution properties for timeout checks: jobservice.scheduled.agentTimeoutAfter: If a Job Agent does not send a heartbeat the Job Agent is flagged as timed-out after this period in milliseconds (default: 30000) jobservice.scheduled.jobTimeoutAfter: Jobs are flagged as timed-out after this period in milliseconds if no update/heartbeat for the job was received (default: 30000) Enhancements - Processes which contain dummy / unknown operators will now fail fast and will not be submitted to Job Agents - Processes which are executed by Job Agents will ignore breakpoints - Improved error message when remote Job Agent cannot connect to RapidMiner Server - Improved token renewal schedule for Job Container authorization - Job Container will no longer fail instantly if RapidMiner Server is temporarily unavailable during Job start. Instead, it will try to re-establish the connection up to five times - In the Job Agent, separated the bundled extensions folder from the custom extensions folder by moving it to engine/plugins Bugfixes - Fixed problem with PostgresSQL which led to the execution details page not being displayed - Fixed RapidMiner Server installer failure for Oracle DB - Fixed a problem with Trigger permission handling which allowed users without execute permission to run processes
https://docs.rapidminer.com/8.1/server/releases/changes-8.1.1.html
2019-08-17T11:10:55
CC-MAIN-2019-35
1566027312128.3
[]
docs.rapidminer.com
Blueworx Voice Response can be set up to operate in more than one language. One Blueworx Voice Response voice application can play voice to callers in multiple languages. The Blueworx Voice Response window text can display in different languages to different people at the same time. In addition, the keyboard for the pSeries computer can use the character sets for different languages. To find out which languages are delivered with this release of Blueworx Voice Response, see the README file in /usr/lpp/dirTalk/readme.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/i628427.html
2019-08-17T10:39:26
CC-MAIN-2019-35
1566027312128.3
[]
docs.blueworx.com
Content Views helps you to show post info (thumbnail, title, content, meta fields, etc.) easily without coding. You can show info vertically, or show thumbnail (image) on the left/right of text. To show thumbnail (image) on the left/right of text, please: - select the Grid layout - select options as below (in the “Display Settings” tab): Best regards,
https://docs.contentviewspro.com/show-thumbnail-image-left-right-text/
2019-08-17T10:46:33
CC-MAIN-2019-35
1566027312128.3
[]
docs.contentviewspro.com
Graph storage in the DSE database keyspace and tables Describes graph storage in the DSE database at a high level. DSE Graph uses the DSE database to store schema and data. Two DSE database keyspaces are created for each graph, <graphname> and <graphname_system>. For example, for a graph called food, the two keyspaces created will be food and food_system. The first keyspace food will hold the data for the graph. The second keyspace food_system holds schema and other system data about the graph. In the <graphname> keyspace, two tables are created for each vertex label to store vertex and edge information, vertexLabel_p and vertexLabel_e, respectively. For example, for a vertex label author, two tables are created, author_p and author_e.
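As an illustrative sketch (the commands below are assumptions for a typical DSE Graph session, not taken from this page; only the food/author naming mirrors the examples above):

// In the DSE Gremlin console: create a graph named "food".
system.graph('food').create()

// From cqlsh, the backing keyspaces and tables should then be visible, e.g.:
//   DESCRIBE KEYSPACES;          ->  ... food   food_system ...
//   USE food; DESCRIBE TABLES;   ->  ... author_p   author_e ...  (one pair per vertex label)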
https://docs.datastax.com/en/dse/6.7/dse-dev/datastax_enterprise/graph/reference/refCassKSandTables.html
2019-08-17T11:04:28
CC-MAIN-2019-35
1566027312128.3
[]
docs.datastax.com
You are viewing the RapidMiner Server documentation for version 8.2 - Check here for latest version What's New in RapidMiner Server 8.0.0? Released: December 04th, 2017 The following describes the enhancements and bug fixes in RapidMiner Server 8.0.0: Important notes The process execution on RapidMiner Server has fundamentally changed. Please visit this page for more information before upgrading! New features - Processes are no longer executed inside RapidMiner Server itself but instead run externally. The process execution is managed by so called Job Agents, which do not need to run on the same machine as RapidMiner Server. This allows for total horizontal scalability of process execution. Enhancements - License expiration warnings are no longer shown if an upcoming license is available. - Webservices can no longer sometimes block the entire RapidMiner Server instance despite using almost no CPU or memory resources. For faster performance, they also now run single-threaded by default (each web service uses only 1 CPU core). To enable parallel execution, set the property com.rapidanalytics.webservices.concurrency to the number of threads that should be used by each web service call. Please be sure to read the information available here before doing so! - It is now possible to change the target queue of scheduled processes via the UI. - Scheduled Processes now display a warning if their queue does not exist anymore/has no Job Agents connected to it. - Triggers list now has tooltips. - Triggers now display a warning if their queue does not exist anymore/has no Job Agents connected to it. - Added com.rapidanalytics.security.x_frame_options property to allow administrators to disable embedding elements of RM Server into other websites. - Added com.rapidanalytics.security.access_control_allow_origin and related properties to enable administrators to allow CORS. See Server Settings for more details about the new properties. - All triggers are paused while waiting for pending migration steps and will be started as soon as the migration is completed. - Installer now contains more documentation on the first page, especially in regards to an upgrade. - Version number is now shown in the installer. Bugfixes - Database connections can now be tested without having to save them first - Deleting a property in the System Settings UI now resets its value properly in the database - Fixed an issue that kept obsolete data in the database - Session cookie can no longer be accessed by scripts - Fixed some Server error responses - Fixed an issue that could cause errors during LDAP authentication - Fixed an issue that could prevent Salesforce connections to work on Server - Fixed some problems regarding PostgreSQL databases - Upgraded PostgreSQL JDBC driver to version 42.1.4 - License expiration warning is no longer shown to users who are not logged in - Various fixes behind the scenes
https://docs.rapidminer.com/8.2/server/releases/changes-8.0.0.html
2019-08-17T10:42:19
CC-MAIN-2019-35
1566027312128.3
[]
docs.rapidminer.com
ThoughtSpot uses replication of stored data. When a disk goes bad, ThoughtSpot continues to operate. Replacement of a bad disk should be initiated through ThoughtSpot Support in this event, at your earliest convenience. Symptoms You should suspect disk failure if you observe these symptoms: - Performance degrades significantly. - You receive alert emails beginning with WARNING or CRITICAL that contain DISK_ERROR in the subject. If you notice these symptoms, contact ThoughtSpot Support. Disk replacement The guidelines for disk replacement are: - Losing one or two disks: The cluster continues to operate, but you should replace the disk(s) at the earliest convenience. - Losing more than two disks: The cluster continues to operate, but the application may be inaccessible. Replace the disks to restore original operation. Disk replacement is done on site by ThoughtSpot Support. Disks can be replaced while ThoughtSpot is running. However the disk replacement procedure involves a node restart, so a disruption of up to five minutes can happen, depending on what services are running on that node.
https://docs.thoughtspot.com/5.2/disaster-recovery/disk-failure.html
2019-08-17T11:54:03
CC-MAIN-2019-35
1566027312128.3
[]
docs.thoughtspot.com
discard_block_engine is a pseudo-random number generator adaptor that discards a certain amount of data produced by the base engine. From each block of size P generated by the base engine, the adaptor keeps only R numbers, discarding the rest. © cppreference.com Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
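A small usage sketch: the standard library typedef std::ranlux24 is defined in terms of discard_block_engine, with block size P = 223 and used size R = 23.

#include <iostream>
#include <random>

int main() {
    // std::ranlux24 is std::discard_block_engine<std::ranlux24_base, 223, 23>:
    // from every block of 223 numbers produced by ranlux24_base, only the
    // first 23 are delivered and the remaining 200 are discarded.
    std::ranlux24 engine(42);

    for (int i = 0; i < 5; ++i)
        std::cout << engine() << '\n';

    return 0;
}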
https://docs.w3cub.com/cpp/numeric/random/discard_block_engine/
2019-08-17T10:53:40
CC-MAIN-2019-35
1566027312128.3
[]
docs.w3cub.com
Indium

Indium is a JavaScript development environment for Emacs. Indium is Free Software, licensed under the GPL v3.0. You can follow its development on GitHub.

Indium connects to a browser tab or nodejs process and provides several features for JavaScript development, including:
- a REPL (with auto completion) & object inspection;
- an inspector, with history and navigation;
- a scratch buffer (M-x indium-scratch);
- JavaScript evaluation in JS buffers with indium-interaction-mode;
- a stepping Debugger, similar to edebug, or cider.

This documentation can be read online at https://indium.readthedocs.io and in Info format within Emacs (with (info "Indium") or C-h i m indium RET).

Table of contents
- Installation
- Getting up and running
- The REPL
- Interaction in JS buffers
- The stepping debugger
- The inspector
- Troubleshooting
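A minimal configuration sketch (assuming Indium has been installed from a package archive such as MELPA; the hook name depends on which JavaScript major mode you use):

;; Sketch only: load Indium and enable its interaction mode in JS buffers,
;; which provides in-buffer evaluation and the debugger commands.
(require 'indium)
(add-hook 'js-mode-hook #'indium-interaction-mode)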
https://indium.readthedocs.io/en/latest/
2019-08-17T11:47:14
CC-MAIN-2019-35
1566027312128.3
[]
indium.readthedocs.io
If the default password for the Orchestrator configuration interface is changed, you cannot retrieve it because Orchestrator uses encryption to encode passwords. You can revert to the default password vmware if the current password is not known. Procedure - Navigate to the location of the passwd.properties configuration file. - Open the passwd.properties file in a text editor. - Delete the contents of the file. - Add the following line to the passwd.properties file.\=\= - Save the passwd.properties file. If you are using the Orchestrator Appliance, you might need to set the ownership of the passwd.properties file by running the chown vco.vco passwd.properties command. - Restart the vRealize Orchestrator Configuration service. Results You can log in to the Orchestrator configuration interface with the default credentials. User name: vmware Password: vmware
https://docs.vmware.com/en/vRealize-Orchestrator/6.0.1/com.vmware.vrealize.orchestrator-install-config.doc/GUID-F265B070-1A80-4478-9D3A-1B4E61F81500.html
2018-12-10T03:12:28
CC-MAIN-2018-51
1544376823236.2
[]
docs.vmware.com
DescribePipelines. Request Syntax { "pipelineIds": [ " string" ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - pipelineIds The IDs of the pipelines to describe. You can pass as many as 25 identifiers in a single call. To obtain pipeline IDs, call ListPipelines. Type: Array of strings Length Constraints: Minimum length of 1. Maximum length of 1024. Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\n\t]* Required: Yes Response Syntax { "pipelineDescriptionList": [ { "description": "string", "fields": [ { "key": "string", "refValue": "string", "stringValue": "string" } ], "name": "string", "pipelineId": "string", "tags": [ { "key": "string", "value": "string" } ] } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - pipelineDescriptionList An array of descriptions for the specified pipelines. Type: Array of PipelineDescription objects Example Sample Request POST / HTTP/1.1 Content-Type: application/x-amz-json-1.1 X-Amz-Target: DataPipeline.DescribePipelines Content-Length: 70 Host: datapipeline.us-east-1.amazonaws.com X-Amz-Date: Mon, 12 Nov 2012 17:49:52 GMT Authorization: AuthParams {"pipelineIds": ["df-08785951KAKJEXAMPLE"] } Sample Response x-amzn-RequestId: 02870eb7-0736-11e2-af6f-6bc7a6be60d9 Content-Type: application/x-amz-json-1.1 Content-Length: 767 Date: Mon, 12 Nov 2012 17:50:53 GMT {"pipelineDescriptionList": [ {"description": "This is my first pipeline", "fields": [ {"key": "@pipelineState", "stringValue": "SCHEDULED"}, {"key": "description", "stringValue": "This is my first pipeline"}, {"key": "name", "stringValue": "myPipeline"}, {"key": "@creationTime", "stringValue": "2012-12-13T01:24:06"}, {"key": "@id", "stringValue": "df-0937003356ZJEXAMPLE"}, {"key": "@sphere", "stringValue": "PIPELINE"}, {"key": "@version", "stringValue": "1"}, {"key": "@userId", "stringValue": "924374875933"}, {"key": "@accountId", "stringValue": "924374875933"}, {"key": "uniqueId", "stringValue": "1234567890"} ], "name": "myPipeline", "pipelineId": "df-0937003356ZJEXAMPLE"} ] } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
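Equivalently, as a quick sketch, the same call can be made with the AWS CLI (the pipeline ID below is the placeholder value from the sample request):

aws datapipeline describe-pipelines \
    --pipeline-ids df-08785951KAKJEXAMPLE \
    --region us-east-1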
https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_DescribePipelines.html
2018-12-10T02:24:36
CC-MAIN-2018-51
1544376823236.2
[]
docs.aws.amazon.com
Scheduling Best Practices No matter which app you schedule your meeting from, here are some helpful tips to follow when setting up meetings. Topics Create a Personalized Link When you create an account with Amazon Chime, you receive a 10-digit Personal Meeting ID. To make it easier for attendees to join your meetings, you can create a personalized link by choosing Add Personalized Link under your name. Create a name that is easy for people to associate with you, like your email address prefix or your name. The name must be at least 12 alpha-numeric characters or longer (special characters are not included in the character count). Amazon Chime makes sure it’s unique in the system, and automatically adds it to your meeting instructions. Help Mobile Users Join Your Meeting When inviting mobile users to your meeting, copy and paste the One-click Mobile Dial-in into the Location field of your meeting invite. When the calendar reminder appears for the meeting on their mobile devices, they can choose the string to dial in automatically and enter the Personal Meeting ID. Enable or Disable Auto-call When your meeting starts, Amazon Chime can call every attendee automatically on all registered devices with Auto-call. You and your attendees don’t have to watch the calendar to join the meeting. To enable Auto-call, make sure that [email protected] is invited to your meeting. To avoid having everyone’s devices ring at the same time (for example, if everyone is in the same office), remove [email protected] from the invitee list. You can also remove [email protected] if your attendees would rather just open the invite and choose the meeting link, Invite a Distribution List If you have a weekly or monthly meeting with a large team or department, and you don't want to invite individual users one-by-one, schedule the meeting with [email protected], then delete [email protected]. Attendees can open the meeting link in the instructions, choose Meetings, Join a Meeting, and enter the PIN manually. Use caution when using distribution lists with [email protected]. To have Amazon Chime initiate the call, you must list individual users. Change Meeting Details When changing meeting details or adding [email protected] to an existing meeting, remember to choose Send Updates to All.
https://docs.aws.amazon.com/chime/latest/ug/chime-scheduling-best-practices.html
2018-12-10T02:24:55
CC-MAIN-2018-51
1544376823236.2
[]
docs.aws.amazon.com
Configure stateful reliable services There are two sets of configuration settings for reliable services. One set is global for all reliable services in the cluster while the other set is specific to a particular reliable service. Global Configuration The global reliable service configuration is specified in the cluster manifest for the cluster under the KtlLogger section. It allows configuration of the shared log location and size plus the global memory limits used by the logger. The cluster manifest is a single XML file that holds settings and configurations that apply to all nodes and services in the cluster. The file is typically called ClusterManifest.xml. You can see the cluster manifest for your cluster using the Get-ServiceFabricClusterManifest powershell command. Configuration names In Azure ARM or on-premises JSON template, the example below shows how to change the shared transaction log that gets created to back any reliable collections for stateful services. "fabricSettings": [{ "name": "KtlLogger", "parameters": [{ "name": "SharedLogSizeInMB", "value": "4096" }] }] Sample local developer cluster manifest section If you want to change this on your local development environment, you need to edit the local clustermanifest.xml file. <Section Name="KtlLogger"> <Parameter Name="SharedLogSizeInMB" Value="4096"/> <Parameter Name="WriteBufferMemoryPoolMinimumInKB" Value="8192" /> <Parameter Name="WriteBufferMemoryPoolMaximumInKB" Value="8192" /> <Parameter Name="SharedLogId" Value="{7668BB54-FE9C-48ed-81AC-FF89E60ED2EF}"/> <Parameter Name="SharedLogPath" Value="f:\SharedLog.Log"/> </Section> Remarks The logger has a global pool of memory allocated from non paged kernel memory that is available to all reliable services on a node for caching state data before being written to the dedicated log associated with the reliable service replica. The pool size is controlled by the WriteBufferMemoryPoolMinimumInKB and WriteBufferMemoryPoolMaximumInKB settings. WriteBufferMemoryPoolMinimumInKB specifies both the initial size of this memory pool and the lowest size to which the memory pool may shrink. WriteBufferMemoryPoolMaximumInKB is the highest size to which the memory pool may grow. Each reliable service replica that is opened may increase the size of the memory pool by a system determined amount up to WriteBufferMemoryPoolMaximumInKB. If there is more demand for memory from the memory pool than is available, requests for memory will be delayed until memory is available. Therefore if the write buffer memory pool is too small for a particular configuration then performance may suffer. The SharedLogId and SharedLogPath settings are always used together to define the GUID and location for the default shared log for all nodes in the cluster. The default shared log is used for all reliable services that do not specify the settings in the settings.xml for the specific service. For best performance, shared log files should be placed on disks that are used solely for the shared log file to reduce contention. SharedLogSizeInMB specifies the amount of disk space to preallocate for the default shared log on all nodes. SharedLogId and SharedLogPath do not need to be specified in order for SharedLogSizeInMB to be specified. Service Specific Configuration You can modify stateful Reliable Services' default configurations by using the configuration package (Config) or the service implementation (code). 
- Config - Configuration via the config package is accomplished by changing the Settings.xml file that is generated in the Microsoft Visual Studio package root under the Config folder for each service in the application. - Code - Configuration via code is accomplished by creating a ReliableStateManager using a ReliableStateManagerConfiguration object with the appropriate options set. By default, the Azure Service Fabric runtime looks for predefined section names in the Settings.xml file and consumes the configuration values while creating the underlying runtime components. Note Do not delete the section names of the following configurations in the Settings.xml file that is generated in the Visual Studio solution unless you plan to configure your service via code. Renaming the config package or section names will require a code change when configuring the ReliableStateManager. Replicator security configuration Replicator security configurations are used to secure the communication channel that is used during replication. This means that services will not be able to see each other's replication traffic, ensuring that the data that is made highly available is also secure. By default, an empty security configuration section prevents replication security. Important On Linux nodes, certificates must be PEM-formatted. To learn more about locating and configuring certificates for Linux, see Configure certificates on Linux. Default section name ReplicatorSecurityConfig Note To change this section name, override the replicatorSecuritySectionName parameter to the ReliableStateManagerConfiguration constructor when creating the ReliableStateManager for this service. Replicator configuration Replicator configurations configure the replicator that is responsible for making the stateful Reliable Service's state highly reliable by replicating and persisting the state locally. The default configuration is generated by the Visual Studio template and should suffice. This section talks about additional configurations that are available to tune the replicator. Default section name ReplicatorConfig Note To change this section name, override the replicatorSettingsSectionName parameter to the ReliableStateManagerConfiguration constructor when creating the ReliableStateManager for this service. Configuration names Sample configuration via code class Program { /// <summary> /// This is the entry point of the service host process. /// </summary> static void Main() { ServiceRuntime.RegisterServiceAsync("HelloWorldStatefulType", context => new HelloWorldStateful(context, new ReliableStateManager(context, new ReliableStateManagerConfiguration( new ReliableStateManagerReplicatorSettings() { RetryInterval = TimeSpan.FromSeconds(3) } )))).GetAwaiter().GetResult(); } } class MyStatefulService : StatefulService { public MyStatefulService(StatefulServiceContext context, IReliableStateManagerReplica stateManager) : base(context, stateManager) { } ... 
}

Sample configuration file

<?xml version="1.0" encoding="utf-8"?>
<Settings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2011/01/fabric">
   <Section Name="ReplicatorConfig">
      <Parameter Name="ReplicatorEndpoint" Value="ReplicatorEndpoint" />
      <Parameter Name="BatchAcknowledgementInterval" Value="0.05"/>
      <Parameter Name="CheckpointThresholdInMB" Value="512" />
   </Section>
   <Section Name="ReplicatorSecurityConfig">
      <Parameter Name="CredentialType" Value="X509" />
      <Parameter Name="FindType" Value="FindByThumbprint" />
      <Parameter Name="FindValue" Value="9d c9 06 b1 69 dc 4f af fd 16 97 ac 78 1e 80 67 90 74 9d 2f" />
      <Parameter Name="StoreLocation" Value="LocalMachine" />
      <Parameter Name="StoreName" Value="My" />
      <Parameter Name="ProtectionLevel" Value="EncryptAndSign" />
      <Parameter Name="AllowedCommonNames" Value="My-Test-SAN1-Alice,My-Test-SAN1-Bob" />
   </Section>
</Settings>

Remarks

BatchAcknowledgementInterval controls replication latency. A value of '0' results in the lowest possible latency, at the cost of throughput (as more acknowledgement messages must be sent and processed, each containing fewer acknowledgements). The larger the value for BatchAcknowledgementInterval, the higher the overall replication throughput, at the cost of higher operation latency. This directly translates to the latency of transaction commits.

The value for CheckpointThresholdInMB controls the amount of disk space that the replicator can use to store state information in the replica's dedicated log file. Increasing this to a higher value than the default could result in faster reconfiguration times when a new replica is added to the set. This is due to the partial state transfer that takes place due to the availability of more history of operations in the log. This can potentially increase the recovery time of a replica after a crash.

The MaxRecordSizeInKB setting defines the maximum size of a record that can be written by the replicator into the log file. In most cases, the default 1024-KB record size is optimal. However, if the service is causing larger data items to be part of the state information, then this value might need to be increased. There is little benefit in making MaxRecordSizeInKB smaller than 1024, as smaller records use only the space needed for the smaller record. We expect that this value would need to be changed in only rare cases.

The SharedLogId and SharedLogPath settings are always used together to make a service use a separate shared log from the default shared log for the node. For best efficiency, as many services as possible should specify the same shared log. Shared log files should be placed on disks that are used solely for the shared log file to reduce head movement contention. We expect that this value would need to be changed in only rare cases.
https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-configuration
2018-12-10T03:09:07
CC-MAIN-2018-51
1544376823236.2
[]
docs.microsoft.com
BITOR() — Returns the mask of bits set in either of two BIGINT values BITOR( value, value ) The BITOR) function returns the mask of bits set in either of two BIGINT integers. In other words, it performs a bitwise OR operation on the two arguments. The result is returned as a new BIGINT value — the arguments to the function are not modified. The left-most bit of an integer number is the sign bit, but has no special meaning for bitwise operations. However, The left-most bit set to 1 followed by all zeros is reserved as the NULL value. If you use a NULL value as an argument, you will receive a NULL response. But in all other circumstances (using non-NULL BIGINT arguments), the bitwise functions should never return a NULL result. Consequently any bitwise operation that would result in only the left-most bit being set, will generate an error at runtime.
https://docs.voltdb.com/UsingVoltDB/sqlfuncbitor.php
2018-12-10T02:09:15
CC-MAIN-2018-51
1544376823236.2
[]
docs.voltdb.com
Stripe Table of Contents The Stripe payment gateway built into Restrict Content Pro allows you to use your Stripe.com account with Restrict Content Pro to accept credit cards directly on your website. Configuring Stripe is simple and only takes a few moments. First, go to Restrict > Settings > Payments and enable Stripe:. Payment Flow With Stripe, will MUST select Test for the Mode option. You can also create a second endpoint and select "Test" for the Mode. Note: If your webhook becomes unresponsive or starts redirecting to another page, payments may be delayed, so it's important that your URL be entered correctly and remain active. You can check to see if webhooks are being successfully processed by visiting the "Webhooks" section in your Stripe dashboard. "4242424242424242" for card number. (Other test card numbers are available here.) - Enter 12 / 2020 for the expiration. Any date in the future will work. - Click "Register". - You should now be redirect to the success page and logged-in as your new user. - Check your Stripe.com account history, you will see the test transaction. - Your new user now has a fully activated account. Stripe API version Restrict Content Pro has been tested up to Stripe API version 2018-02-06.
https://docs.restrictcontentpro.com/article/1549-stripe
2018-12-10T03:29:30
CC-MAIN-2018-51
1544376823236.2
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5539647ae4b0a2d7e23f76f0/file-ZrxSa3Toy3.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5af055a82c7d3a3f981f4eef/file-1bQ2fxTy7B.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5af055e72c7d3a3f981f4ef4/file-SbkMXU1lTB.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/55392130e4b0a2d7e23f7569/file-ax9MsS7L3x.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/553965cae4b0a2d7e23f76ff/file-dyibGnaqpr.png', None], dtype=object) ]
docs.restrictcontentpro.com
How can I fix the table styling? Some of the shortcodes in Restrict Content Pro add tables to your site, like the table on the "Your Membership" page: But if your theme doesn't include any styling for tables, the display might look more like this: Luckily a few quick lines of CSS can get things looking much cleaner! If you're not sure how to add CSS to your theme, you can use a plugin like Simple Custom CSS or you can read up on how to create a child theme. This line of CSS will do a lot towards getting the table looking presentable: .rcp-table { width: 100%; } Or if you want things to look a bit more stylized, here's a larger snippet you can use: .rcp-table { border-collapse: separate; border-spacing: 0; border-width: 1px 0 0 1px; margin: 0 0 1.75em; table-layout: fixed; width: 100%; } .rcp-table, .rcp-table th, .rcp-table td { border: 1px solid #d1d1d1; } .rcp-table th, .rcp-table td { padding: 5px; }
https://docs.restrictcontentpro.com/article/1592-how-can-i-fix-the-table-styling
2018-12-10T03:29:24
CC-MAIN-2018-51
1544376823236.2
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/58333b1ac697916f5d0533d0/file-aqpFw0XB3a.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/58333b5bc697916f5d0533d5/file-eKloCjB3ZN.png', None], dtype=object) ]
docs.restrictcontentpro.com
VoltDB is available in both open source and commercial editions. The open source, or community, edition provides all the transactional performance benefits of VoltDB, plus basic durability and availability. The commercial editions provide additional features needed to support production environments, such as complete durability, dynamic scaling, and WAN replication. Depending on which version you choose, the VoltDB software comes as either pre-built distributions or as source code. This chapter explains the system requirements for running VoltDB, how to install and upgrade the software, and what resources are provided in the kit.
https://docs.voltdb.com/UsingVoltDB/ChapGetStarted.php
2018-12-10T02:49:26
CC-MAIN-2018-51
1544376823236.2
[]
docs.voltdb.com
Format function (Visual Basic for Applications) Returns a Variant (String) containing an expression formatted according to instructions contained in a format expression. Syntax Format(Expression, [Format], [FirstDayOfWeek], [FirstWeekOfYear]) The Format function syntax has these parts. Settings. Date symbols Time symbols Example This example shows various uses of the Format function to format values using both named formats and user-defined formats. For the date separator (/), time separator (:), and AM/ PM literal, the actual formatted output displayed by your system depends on the locale settings, English/U.S./pm") ' Returns "05:04:23 pm". MyStr = Format(MyTime, "hh:mm:ss AM/PM") '". Different formats for different numeric values A user-defined format expression for numbers can have from one to four sections separated by semicolons. If the format argument contains one of the named numeric formats, only one section is allowed. "$#,#\o" Different formats for different string values A format expression for strings can have one section or two sections separated by a semicolon (;). Named date/time formats The following table identifies the predefined date and time format names. Named numeric formats The following table identifies the predefined numeric format names. User-defined string formats You can use any of the following characters to create a format expression for strings. User-defined date/time formats The following table identifies characters you can use to create user-defined date/time formats. User-defined numeric formats The following table identifies characters you can use to create user-defined number formats.
https://docs.microsoft.com/en-us/office/vba/Language/Reference/User-Interface-Help/format-function-visual-basic-for-applications
2018-12-10T01:52:06
CC-MAIN-2018-51
1544376823236.2
[]
docs.microsoft.com
Performing service actions Use the Services tab to manage service operations on your cluster. - In the Services tab, click Actions. - In Actions, click an option. Available options depend on the service you have selected. For example, HDFS service action options include: Clicking Turn On Maintenance Mode suppresses alerts and status indicator changes generated by the service, while allowing you to start, stop, restart, move, or perform maintenance tasks on the service.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/managing-and-monitoring-ambari/content/amb_performing_service_actions.html
2018-12-10T03:02:13
CC-MAIN-2018-51
1544376823236.2
[]
docs.hortonworks.com
Software Download Directory Live Forms v8.0 is no longer supported. Please visit Live Forms Latest for our current Cloud Release. Earlier documentation is available too. This page contains the compatibility release matrix supported for the Live Forms Confluence Add-on. frevvo only supports Confluence Enterprise versions. New Confluence Enterprise version releases will only be certified with the latest Live Forms release. Customers must upgrade to the latest major Live Forms release compatible with the Confluence Enterprise version. Download the version of the Confluence Add-on that you need from our frevvo Software Downloads Directory 8/4/2016 This frevvo CF plugin is compatible with Confluence server v5.10.*. Customers running Confluence 5.10.* must upgrade to this version of the plugin. Tickets Fixed: 8/4/2015 This frevvo CF plugin is compatible with Confluence server v5.8.*. Tickets Fixed: 7/15/2014 This frevvo CF plugin is compatible with Confluence server v5.4.x, v5.5.x and v5.6.x and Live Forms server v5.3.3 or later and Live Forms server v6.1.2.2 or later. The Live Forms server 6.1.2.2 or later is also compatible with Confluence server v5.7 New Features: Confluence user profile attributes available in frevvo's Subject for use in business rules as subject.confluence.* as shown in the example below: if (form.load) { Email.value = _data.getParameter('subject.email'); Location.value = _data.getParameter('subject.confluence.location'); Phone.value = _data.getParameter('subject.confluence.phone'); IM.value = _data.getParameter('subject.confluence.im'); Website.value = _data.getParameter('subject.confluence.website'); Position.value = _data.getParameter('subject.confluence.position'); Department.value = _data.getParameter('subject.confluence.department'); } Tickets fixed:
https://docs.frevvo.com/d/pages/viewpage.action?pageId=21532106
2020-03-28T19:57:49
CC-MAIN-2020-16
1585370493120.15
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
Feature: #70078 - Extensions can provide a class map for class loading¶ See Issue #70078 Description¶ With the old class loader it was possible for extension authors to register several classes in an ext_autoload.php file. This possibility was completely removed with the introduction of composer class loading. In composer mode, one can fully benefit from composer and its class loading options. However, TYPO3 installations in non composer mode (extracted and symlinked archive of sources) lack this functionality completely. Now it is possible to provide a class map section in either the composer.json file or the ext_emconf.php file. This section will be evaluated and used also in non composer mode. Example ext_emconf.php file:

<?php
$EM_CONF[$_EXTKEY] = array(
    // ... other extension settings ...
    'autoload' => array(
        'classmap' => array(
            'Resources/PHP/Libs'
        )
    )
);

In the example configuration the path Resources/PHP/Libs is parsed for PHP files which are automatically added to the class loader.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.5/Feature-70078-ExtensionsCanProvideAClassMapForClassLoading.html
2020-03-28T21:52:37
CC-MAIN-2020-16
1585370493120.15
[]
docs.typo3.org
Continuous Integration¶ Note For advice on writing your tests, see Testing Your Code. Why?¶ Martin Fowler, who first wrote about Continuous Integration (short: CI) together with Kent Beck, describes it as a practice where members of a team integrate their work frequently, with each integration verified by an automated build and tests so that integration errors are detected as quickly as possible. Jenkins¶ Jenkins CI is an extensible Continuous Integration engine. Use it. Tox¶ tox is an automation tool providing packaging, testing, and deployment of Python software right from the console or CI server. It is a generic virtualenv management and test command line tool which provides the following features: - Checking that packages install correctly with different Python versions and interpreters - Running tests in each of the environments, configuring your test tool of choice - Acting as a front-end to Continuous Integration servers, reducing boilerplate and merging CI and shell-based testing Travis-CI¶ Travis-CI is a hosted, distributed CI service which builds and tests open source projects for free. It runs tests across multiple workers and integrates seamlessly with GitHub.
https://python-guide.readthedocs.io/en/latest/scenarios/ci/
2020-03-28T19:54:25
CC-MAIN-2020-16
1585370493120.15
[array(['../../_images/33907150594_9abba7ad0a_k_d.jpg', '../../_images/33907150594_9abba7ad0a_k_d.jpg'], dtype=object)]
python-guide.readthedocs.io
... - Visual Designer Tool for building Rich User interfaces - A robust Node.js server based on Express (see Starting and Ending Profound.js Instances) - Support for Strong Data Types, including)
https://docs.profoundlogic.com/pages/diffpagesbyversion.action?pageId=31752433&selectedPageVersions=14&selectedPageVersions=15
2020-03-28T20:23:33
CC-MAIN-2020-16
1585370493120.15
[]
docs.profoundlogic.com
JavaScript Data Types Data types JavaScript is a dynamically typed language, which means that you don't need to declare the type of a variable, because the JavaScript engine determines it while the code runs. There are two types of data in JavaScript: primitive data types and non-primitive (reference) data types. A variable in JavaScript can contain any of these data types: strings, numbers, objects: var length = 17; // Number var lastName = "Jameson"; // String var x = {firstName:"John", lastName:"Doe"}; // Object Programming languages which allow this kind of thing are called "dynamically typed": there are data types, but variables aren't bound to any of them. There are 7 primitive data types in JavaScript: - Number - BigInt - String - Boolean - Null - Undefined - Symbol - plus the non-primitive Object type Primitive values All of the types mentioned above hold values that are incapable of being changed, except objects. For example, strings are unchangeable, so we call such values "primitive values". Number The Number type represents both integer and floating point numbers. There are many operations for numbers: addition +, subtraction -, multiplication *, division / and others. var num1 = 32; var num2 = +Infinity; We can write numbers with or without a decimal point. They can also be +Infinity, -Infinity, and NaN (not a number). Infinity represents the mathematical Infinity ∞, which is a special value that's bigger than any number. We can get it by dividing by zero: console.log( 1 / 0 ); // Infinity NaN represents a computational error; it is the result of an incorrect or an undefined mathematical operation, for example: console.log( "not a number" / 2 ); // NaN, such division is erroneous Any operation on NaN returns NaN: console.log( "not a number" / 2 + 5 ); // NaN A NaN in a mathematical expression influences the whole result. You can find more information about working with numbers in Numbers. BigInt In JavaScript the BigInt type is a numeric primitive that can represent whole numbers with arbitrary precision. BigInt lets you safely store and operate on large integers even beyond the safe integer limit for Numbers. In JavaScript, the "number" type can't represent integer values which are larger than 2^53 or less than -2^53 for negatives. It is a technical limitation caused by their internal representation. That's about 16 decimal digits, but sometimes we need really big numbers, e.g. for cryptography or microsecond-precision timestamps. const bigint = 8562323159475639012345678901234567890n; const sameBigint = BigInt("8562323159475639012345678901234567890"); const bigintNumber = BigInt(10); // same as 10n The BigInt type was added to the language not long ago and it represents integers of arbitrary length. A BigInt is created by appending n to the end of an integer literal: const bigint = 8562323159475639012345678901234567890n; Click BigInt to find more information about working with BigInt. String We use strings for storing text. In JavaScript, strings can't be changed, and they must be wrapped in quotes. let str = "Hello"; let str2 = 'Welcome to W3Docs!'; let phrase = `${str} dear friend`; There are 3 types of quotes in JavaScript: - Double quotes: "Welcome". - Single quotes: 'Welcome'. - Backticks: `Welcome`. Double and single quotes are "simple" quotes, practically there is no difference between them in JavaScript.
Backticks are "extended functionality" quotes that allow us to embed variables and expressions into a string by wrapping them in ${…}, for instance: Example of embedding a variable in a string: let name = "W3Docs"; // embed a variable console.log( `Welcome to ${name}!` ); // Welcome to W3Docs! Example of embedding an expression in a string: console.log( `the result is ${5 + 3}` ); // the result is 8 The expression inside ${…} is evaluated, and its result becomes a part of the string. We can put anything in there: a name, an arithmetical expression or something more complex. But it can only be done in backticks, other quotes don't have this embedding functionality. console.log( "the result is ${2 + 5}" ); This example returns the literal string "the result is ${2 + 5}", because the embedding doesn't work inside double quotes. We'll see more about working with strings in the chapter Strings. Boolean Boolean is a datatype which has just two values: true or false. Boolean is also used in JavaScript as a function for getting the boolean value of an object, variable, expression, condition, etc. This type is used to store yes/no values, where true means "yes, correct", and false means "no, incorrect". In the example below, x1 and x2 store the boolean values true and false: var x1 = true; var x2 = false; var x1 = "true"; // not a boolean value, it's a string var x2 = "false"; // not a boolean value, it's a string Boolean values can also be the result of comparisons: let isSmaller = 5 < 3; console.log( isSmaller ); //false (the comparison result is "no") We'll cover booleans more thoroughly in Boolean. Null It is one of JavaScript's primitive values which is treated as falsy for boolean operations. In JavaScript null means "nothing", "empty". It's something that doesn't exist. But in JavaScript, the data type of null is an object; you can empty a variable by setting it to null: let price = null; The code above declares that price is unknown or empty for some reason. Undefined The special value undefined makes a type of its own, just like null. In JavaScript, a variable without a value is called undefined. Its value and type are undefined, which means that the "value is not assigned": let x; console.log(x); // shows "undefined" Yet, technically, it is possible to assign undefined to any variable: let x = 123; x = undefined; console.log(x); // "undefined" But we don't recommend doing that. Usually, we use null to assign an "unknown" or "empty" value to a variable, and we use undefined for checking if a variable has been assigned. Symbol A Symbol is an immutable primitive value that is unique and can be used as the key of an Object property. In some programming languages, Symbols are called "atoms". A value which has the data type Symbol can be referred to as a "Symbol value". A symbol value is created in JavaScript by invoking the function Symbol, which produces an unnamed, unique value. A symbol can be used as an object property. A Symbol value represents a unique identifier. For example, here are two symbols with the same description: let symbol1 = Symbol("symbol") let symbol2 = Symbol("symbol") console.log(symbol1 === symbol2) // returns "false" In this example, console.log returns false. Symbols are guaranteed to be unique: even if we create a lot of symbols with the same description, they are different values. You can find more information in Symbol. Object We use objects to store keyed collections of various data and more complicated entities. In JavaScript, objects pass through almost every aspect of the language.
First, we must understand them, then just go into depth. Object is not a primitive data type, it's a collection of properties, which can reference any type of data, including objects and/or primitive values. Objects can be created with curly brackets {…} with a list of properties. var obj = { key1: 'value1', key2: 'value2', key3: true, key4: 20, key5: {} } Let's imagine that an object is a cabinet with signed files, where every piece of data is stored by its key. It's rather easy to find a file by its name, or to add or remove a file. An empty object ("cabinet") can be created with the help of one of two syntaxes: let obj = new Object(); // "object constructor" syntax let obj = {}; // "object literal" syntax In this case we usually use the curly brackets {...} and call that declaration an object literal. The typeof operator The typeof operator helps us to see which type is stored in a variable. It supports 2 forms of syntax: - As an operator: typeof x; - As a function: typeof(x); It can work with or without parentheses, the result will be the same. The example below shows that the call to typeof x returns a string with the type name: typeof undefined; // "undefined" typeof 0; // "number" typeof 15n; // "bigint" typeof false; // "boolean" typeof "foo"; // "string" typeof Symbol("id"); // "symbol" typeof Math; // "object" typeof null; // "object" typeof prompt; // "function" - Math is a built-in object. Math provides mathematical operations. You can find more information about Math in Math. - null is not an object, but the result of typeof null is "object", which is an error in typeof. null is a special value with a separate type of its own. - The result of typeof prompt is "function", because prompt is a function. We'll study functions in Functions. Functions are part of the object type, but typeof treats them differently, returning "function". Type Conversions We can convert JavaScript values to another data type: either explicitly, using JavaScript functions, or automatically by JavaScript itself. For instance, alert automatically converts any value to a string to show it, mathematical operations convert values to numbers and so on. Converting Numbers to Strings The global method String() can convert numbers to strings and can be used on any type of numbers, literals, variables, or expressions: String(a); // returns a string from a number variable a String(100); // returns a string from a number literal 100 String(20 + 30); // returns a string from a number from an expression We can also call the String(value) function to convert a value to a string: let value = true; console.log(typeof value); // boolean value = String(value); // now value is a string "true" console.log(typeof value); // string Usually, string conversion is obvious: a false becomes "false", null becomes "null" and so on. Converting Booleans to Strings The global method String() can also convert booleans to strings. Boolean conversion itself is the simplest one: it usually happens in logical operations but can also be performed with a call to Boolean(value). String(false); // returns "false" String(true); // returns "true" Here is the conversion rule: the values which are intuitively "empty" (0, an empty string, null, undefined, and NaN) become false, other values become true.
For example: console.log( Boolean(1) ); // true console.log( Boolean(0) ); // false console.log( Boolean("hello") ); // true console.log( Boolean("") ); // false console.log( Boolean("0") ); // true console.log( Boolean(" ") ); // spaces, also true (any non-empty string is true) Converting Dates to Strings The global method String() converts dates to strings. console.log(String(Date())); // returns "Tue Jan 21 2020 10:48:16 GMT+0200 (W. Europe Daylight Time)" Converting Strings to Numbers The global method Number() converts strings to numbers. Strings containing numbers (like "5.20") convert to numbers (like 5.20). Empty strings convert to 0. Anything else converts to NaN. Number("5.20"); // returns 5.20 Number(" "); // returns 0 Number(""); // returns 0 Number("99 88"); // returns NaN We meet numeric conversion automatically in mathematical functions and expressions. An example is division (/) applied to non-numbers: console.log( "6" / "2" ); // 3, strings are converted to numbers The Number(value) function helps us explicitly convert a value to a number: let str = "123"; console.log(typeof str); // string let num = Number(str); // becomes a number 123 console.log(typeof num); // number Explicit conversion is usually required when we read a value from a string-based source, such as a text form, but expect a number to be entered.
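To tie the conversion functions together, here is a small sketch (the variable names are invented for illustration) that reads a numeric value stored as a string and converts it explicitly before doing arithmetic:

// Assume ageInput arrived from a text form, so it is a string
let ageInput = "42";
console.log(typeof ageInput); // "string"

let age = Number(ageInput);        // explicit string-to-number conversion
console.log(typeof age);           // "number"
console.log(age + 1);              // 43, numeric addition
console.log(ageInput + 1);         // "421", string concatenation without conversion

let label = String(age);           // explicit number-to-string conversion
let isAdult = Boolean(age >= 18);  // boolean from a comparison
console.log(label, isAdult);       // 42 true

The age + 1 and ageInput + 1 lines show why converting first matters: with the string value, the + operator concatenates instead of adding.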
https://www.w3docs.com/learn-javascript/javascript-data-types.html
2020-03-28T20:52:22
CC-MAIN-2020-16
1585370493120.15
[]
www.w3docs.com
Object to Primitive Conversion Now it's time to find out what will happen if you add objects obj1 + obj2, subtract obj1 - obj2, or print using alert(obj). In such a case, objects will be auto-converted to primitives, after which the operation will be carried out. Here are the main rules for numeric, string, and boolean conversions of objects: - All objects are considered true in a boolean conversion, so there are only string and numeric conversions. - Numeric conversion takes place when subtracting objects or applying mathematical functions. - String conversion happens when we output an object, like alert(obj), and in contexts like that. ToPrimitive¶ It is possible to enhance string and numeric conversion. To meet that goal, you need to use special object methods. We observe three variants of type conversion, also known as "hints", that are described in the specification: "string" In case of an object-to-string conversion, while operating on an object in a context which expects a string, like alert: // output alert(obj); // using object as a property key anotherObj[obj] = 120; "number" In case of an object-to-number conversion, like when you are doing mathematics: // explicit conversion let num = Number(obj); // maths (except binary plus) let n = +obj; // unary plus let delta = obj1 - obj2; // less/greater comparison let greater = obj1 > obj2; "default" It happens rarely, when the operator isn't sure which type to expect. For example, the binary + will work with both numbers and strings. If the binary + has an object as an argument, it will use the "default" hint for converting it. Also, when you compare an object using == with a symbol, number, or a string, it is not clear which conversion is better to do, so the "default" hint is used. // binary plus uses the "default" hint let total = obj1 + obj2; // obj == number uses the "default" hint if (car == 1) { ... } The greater and less comparison operators <, > work with numbers and strings as well. But note that in this case, the "number" hint is used, not the "default", as in the previous examples. Anyway, in practice, you don't need to remember all these details: almost all built-in objects (the exception is the Date object) implement the "default" conversion the same way as "number". For implementing the conversion, JavaScript needs to find and invoke 3 object methods. They are as follows: - Call obj[Symbol.toPrimitive](hint) - the method with the symbolic key Symbol.toPrimitive - if such a method exists. - Otherwise, when the hint is "string", keep on trying obj.toString() and obj.valueOf(). - If the hint is "default" or "number", keep on trying obj.valueOf() and obj.toString(). Symbol.toPrimitive¶ The first thing that you should know is that a built-in symbol exists known as Symbol.toPrimitive. In general, you can use it for naming your conversion method as follows: obj[Symbol.toPrimitive] = function (hint) { // must return a primitive value // hint = one of "string", "number", "default" }; In the following example, the car object implements it: let car = { name: "BMW", price: 30000, [Symbol.toPrimitive](hint) { console.log(`hint: ${hint}`); return hint == "string" ? `{name: "${this.name}"}` : this.price; } }; // conversions demo: console.log(car); // hint: string -> {name: "BMW"} console.log(+car); // hint: number -> 30000 console.log(car + 5000); // hint: default -> 35000 toString/valueOf¶ Now it's time to learn about the toString and valueOf methods. Don't get surprised to find out that they are not symbols. They are among the most ancient methods.
The primary purpose of these methods is to provide an "ancient-style" way to run the conversion. If Symbol.toPrimitive doesn't exist, JavaScript will try to find them in the following sequence: - toString -> valueOf in case of the "string" hint. - valueOf -> toString in other cases. The methods mentioned above must return a primitive value. Returning an object from valueOf or toString means that it is ignored. By default, a plain object has the following toString and valueOf methods: - "[object Object]" is returned by the toString method. - The object itself is returned by the valueOf method. Let's have a look at this case: let site = { name: "W3Docs" }; console.log(site); // [object Object] console.log(site.valueOf() === site); // true Therefore, anytime you use an object as a string, you will get [object Object]. The Types of Return¶ First and foremost, you need to note that primitive-conversion methods do not necessarily return the primitive that was hinted. There is no control whether the toString method returns exactly a string, or whether the Symbol.toPrimitive method returns a number for the "number" hint. One thing is compulsory: the methods mentioned above have to return a primitive and never an object. Further Conversions¶ You have already learned that a wide range of operators and functions implement type conversions. For example, multiplying with * will convert operands into numbers. In the event of passing an object as an argument, two stages can be distinguished: - The object is converted to a primitive. - In case the resulting primitive isn't of the proper type, it is converted. Let's have a look at this example: let obj = { // toString handles all conversions in the absence of other methods toString() { return "2"; } }; console.log(obj * 3); // 6, the object is converted to the primitive "2", after which a number is made by multiplication Here, as the first step, the object is converted to a primitive by the multiplication obj * 3. Afterward, "2" * 3 is transformed into 2 * 3. Strings will be concatenated in the same situation by the binary plus, as it gladly accepts a string. Here is an example: let obj = { toString() { return "2"; } }; console.log(obj + 3); // "23" ("2" + 3), the object is converted to a primitive that returned a string => concatenation Summary¶ The conversion from object to primitive can be invoked automatically by a range of built-in functions and operators expecting a primitive as a value. It has the following three hints: - "string" (used for alert as well as other operations that require a string) - "number" (used for mathematics) - "default" (used by a few operators) The algorithm of the conversion is as follows: - Run obj[Symbol.toPrimitive](hint) in case there is such a method. - Otherwise, when the hint is "string", run obj.toString() and obj.valueOf(), whichever exists. - In case the hint is "default" or "number", run obj.valueOf() and obj.toString(), whichever exists.
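As a small, hypothetical illustration of the lookup order described above (the object and its values are invented for this example), here is an object that implements both toString and valueOf, so you can see which method each hint picks:

let ticket = {
  seat: "12A",
  price: 250,
  toString() {
    return `Ticket ${this.seat}`; // tried first for the "string" hint
  },
  valueOf() {
    return this.price; // tried first for the "number" and "default" hints
  }
};

console.log(`${ticket}`);  // "Ticket 12A" - a template literal uses the "string" hint
console.log(ticket - 50);  // 200 - subtraction uses the "number" hint, so valueOf wins
console.log(ticket + 0);   // 250 - binary plus uses the "default" hint, so valueOf wins

If valueOf were removed, all three hints would fall back to toString, and ticket - 50 would become NaN because "Ticket 12A" can't be converted to a number.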
https://www.w3docs.com/learn-javascript/object-to-primitive-conversion.html
2020-03-28T20:34:18
CC-MAIN-2020-16
1585370493120.15
[]
www.w3docs.com
Introduction EMC CLARiiON leads the midrange storage market in providing customers with cost-effective storage solutions that deliver the highest levels of performance, functionality, and reliability. CLARiiON is ideal for today's mid-sized enterprises as it can scale system capacity and performance, simplify management, and protect critical applications and data. This implies that even the slightest of deficiencies in the performance of the server, if not detected promptly and resolved quickly, can result in irredeemable loss of critical data. To avoid such an adversity, the EMC CLARiiON storage solution should be monitored 24 x 7, with monitoring that proactively alerts administrators to issues in its overall performance and its critical operations, so that the holes are plugged before any data loss occurs. eG Enterprise helps administrators in this regard.
https://docs.eginnovations.com/EMC_Clariion/Introduction_to_EMC_Clariion_Monitoring.htm
2020-03-28T19:58:22
CC-MAIN-2020-16
1585370493120.15
[array(['../Resources/Images/start-free-trial.jpg', None], dtype=object)]
docs.eginnovations.com
8.5.107.03 Genesys Pulse Release Notes What's New - Pulse now includes the ability to filter rows in the Grid Widget and Data View by column to display the current agent status even if the Show Agent State Icon is enabled for this statistic. - A link to GAX is no longer displayed for users who have only Pulse privileges. - You can now pin the Name column in the Grid Widget as defined by the Display Options. - The Name column is now always pinned in the Data View. - Pulse now provides an Alert Widget template. It allows you to create and configure an Alert widget to monitor all widget alerts from a single panel, which provides a notification center for multiple dashboards. - You can propagate template changes to users' widgets without recreating the existing widgets from scratch. Pulse asks you to propagate changes when you save updates to the template if there are any widgets based on that template. - A dark theme is now available for the Wallboard in fullscreen mode. - Pulse now supports working in multiple data centers with traffic minimized between sites for disaster recovery purposes. - The transpose mode is now added to the Line Chart view and the Line Widget. - A new pull-collector folder, for internal use only, is now included in the installation folder. Resolved Issues This release contains the following resolved issues: Pulse no longer has memory leakage when you switch between tabs. (WBRT-8661) Autoplay functionality in KPI and Donut widgets now continues to function when you switch between pages. (WBRT-8520) Pulse now correctly displays statistics with different formats and values on line widgets. Previously, the statistics might have been incorrectly scaled. (WBRT-8355) Pulse now requires a minimum JRE version of 8. (WBRT-8322) All Pulse tabs and windows within the same browser log out and in simultaneously, which prevents you from creating malfunctioning widgets or losing widgets or tabs. (WBRT-8596) Scroll bars in Grid Widget and Data View have an updated style and are now hidden when they are not needed. (WBRT-6881) The Callback template is no longer provided, because it is only supported in Genesys Cloud. (WBRT-8511) You can now successfully save template changes after a series of edits including renaming and overwriting the template. Previously in this scenario, Pulse displayed a TEMPLATE_NOT_FOUND message. (WBRT-8502) Your changes to the Grid widget settings are now preserved after you edit the widget. Previously, settings such as column width and filter were reset to their default values. (WBRT-8304) You can no longer perform any actions while the Pulse DB is unavailable. Previously when you performed actions, you might lose widgets and tabs that you previously created. (WBRT-8424) When Configuration Server is stopped, Pulse now shows a Service unavailable message after you reload a page. Previously in this scenario, Pulse displayed an empty screen. (WBRT-8625) Upgrade Notes Refer to the Deployment Procedure for this release to deploy the installation package within your environment. Supported Languages See Release 8.5.1 Translation Support.
https://docs.genesys.com/Documentation/RN/8.5.x/pulse85rn/pulse8510703
2020-03-28T20:50:30
CC-MAIN-2020-16
1585370493120.15
[]
docs.genesys.com
Indoor Clotheslines FAQ for indoor clotheslines and indoor clothes airer products - Custom Made Clotheslines - Is Hills Portable 120 Clothesline easy to fold up and store? - Instahanger Quikcloset Indoor clothesline - Instahanger Clothes Airer AH12M Product Video - Instahanger Over Door Clothes Airer AH12MB Product Video - Can I get a special colour for my Austral Indoor Outdoor clothesline? - How much does the Hills Portable 170 and Hills Portable 120 weigh? - Will the Hills Portable 170 Clothesline hold my bed sheets? - How easy is the Hills Portable 170 Clothesline to assemble? - Does the Hills Portable 170 Clothesline fold up flat for storage and if so how do I do that? - What is the best height for installing a Austral Indoor Outdoor clothesline? - How much does the Hills Portable 170 Clothesline weigh? - What does ‘Standard’ Austral Indoor Outdoor clothesline ground mount kit mean? - Does the Instahanger come with a warranty? - How much line space do I get with either model? - What does ‘Plated’ Austral Indoor Outdoor Clothesline ground mount kit mean? - Can I install a Austral Indoor Outdoor clothesline on a fence? - Can I install a Austral Indoor Outdoor Clothesline on a wall other than brick? - Can I install the Austral Indoor Outdoor clothesline in the ground instead of on the wall? - Can I make the width of my Austral Indoor Outdoor clothesline smaller?
https://docs.lifestyleclotheslines.com.au/category/69-indoor-clotheslines
2020-03-28T20:27:07
CC-MAIN-2020-16
1585370493120.15
[]
docs.lifestyleclotheslines.com.au
Galaxy DNA-Seq Tutorial From UABgrid Documentation
https://docs.uabgrid.uab.edu/tgw/index.php?title=Galaxy_DNA-Seq_Tutorial&direction=prev&oldid=3185
2020-03-28T22:10:23
CC-MAIN-2020-16
1585370493120.15
[array(['/tgw/images/2/26/CMV_VAC_WR_3_2_base_quality.jpg', 'CMV VAC WR 3 2 base quality.jpg'], dtype=object) ]
docs.uabgrid.uab.edu
Product Security Product security allows you to restrict visibility and/or the ability to purchase a wine or product to specific contact types or club levels (club members). Product security must be set on a per product basis under the Manage Security section of each product. Products are not secured by default. 1. Go to Manage Security > Click Edit. 2. Check the Product Security checkbox. 3. Now you can make the necessary adjustments for your security on the product. 4. When complete, press Save. By default, secured products will be secured for admin panel orders as well. There is an option to allow admin panel users to create orders for customers who are not part of the club level or contact type. This option must be added by WineDirect. If you would like this functionality added, please contact [email protected] and a WineDirect support representative would be happy to assist you further.
https://docs.winedirect.com/Product/Misc-Product-Documents/Security
2020-03-28T20:46:34
CC-MAIN-2020-16
1585370493120.15
[array(['https://documentation.vin65.com/assets/client/Image/Store/products/New-Product/product-security690.png', None], dtype=object) ]
docs.winedirect.com
Infrastructure Workflow - Creating Stack Templates Updated by Agile Stacks Infrastructure Workflow - Creating Stack Templates Stack Templates are the cornerstone of reproducible and composable stacks (clusters). You can learn more about the principal parts of the AgileStacks' system here. Stack templates are automatically generated and saved in a Git repository, allowing you to manage, configure, and implement change control for your infrastructure as code. In this short tutorial, we'll walk you through the Stack Template creation and deployment process. - Login into the SuperHub UI (controlplane.agilestacks.io) and click the Templates tab - then click the Create link - Next, you will need to name the stack template; it needs to be unique across all environments. Note: When naming a stack, try to choose a meaningful name that includes the stack type or purpose, such as ML Stack or Innovation Lab Stack. You can also use tags to give additional context to your Stack Template such as the date and author. - You need to choose if this will be a Platform Stack or an Overlay. Platform Stacks deploy a Kubernetes cluster, and form the foundational stack of a deployment. Overlay Stacks are couplings of integrated components, and can be added later to a Platform Stack. Any number of different overlay stacks can be added to an existing Stack. - Choose from a list of supported stack components: Kubernetes, Harbor, Jenkins, GitLab, Clair, Prometheus, and more. Components are grouped by functional area. - Select the required stack components. Sometimes components are unavailable because they are incompatible with other choices made, either because of a selection from a previous functional area or because they cannot coexist with a component in the selected functional area. - Click on the gear icon on the top right of each component tile to see a detailed component description or to enter advanced configuration options. - At the end of the template creation page, there is a choice to Save for later, which creates the stack template for peer review or allows editing of the auto-generated files in the Git repository - that is, it creates an automation template in a Git repository without deploying it - or Build it now, which creates and immediately deploys a stack to a Cloud Account. - When ready to deploy the stack template, click Build it now and follow the instructions to deploy a stack instance from that template - Choose the Environment from the drop-down list where the template is deployed (i.e., the cloud account) - Under "Domain name" select Click to edit to enter the value for this template to reference, using only valid DNS characters: - lower case characters - numbers - or the dash - symbol - Depending on the Template and Cloud location, the capacity configurations for a Platform stack may need to be adjusted - Here is an AWS view - After tweaking the settings to your liking, click deploy, and the view will change to a live log output of your deployment. You can close this tab or navigate away and the deployment will continue. If you deployed a Platform Stack, the process should complete in about 20 minutes. - While waiting for the stack to deploy, you can review the generated infrastructure as code. First open the Templates -> List section in the navigation bar of the AgileStacks control plane. Find the stack template you have just created. Click URL in the Stack Template source code section and then press Copy URL. The URL you see will be copied to your clipboard.
Now, open a terminal with your favorite shell, and clone the git repository using the URL from your clipboard: git clone <paste the copied URL here> In the cloned repository you can view the infrastructure as code definition of the stack template, based on Terraform, CloudFormation, Helm and other languages. Congratulations, you have just created and deployed a new stack template! Like what you see? Sign up today!
https://docs.agilestacks.com/article/zsp052edkw-create-stack-template-and-deploy-stack
2020-03-28T20:32:50
CC-MAIN-2020-16
1585370493120.15
[array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1576017979931/image.png', None], dtype=object) array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1576018053100/image.png', None], dtype=object) array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1576019000066/image.png', None], dtype=object) array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1576018836415/image.png', None], dtype=object) array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1576019182461/image.png', None], dtype=object) array(['https://files.helpdocs.io/5dz7tj1wpg/articles/zsp052edkw/1580251019809/request-demo.png', None], dtype=object) ]
docs.agilestacks.com
It is important to regularly update your Maltego Client to the latest version. Ideally, this should be done online by following the instructions set out in section 1. below. However, a method to update the Client offline is available. Please refer to section 2. below should you need to perform an offline update. 1. Updating the Client online: Ideally, the Client should be updated online. This can be done by: - Clicking the Application Button (circle in top left corner) - Hovering over Tools - Clicking Check for Updates The steps to perform an online update. 2. Updating the Client offline: To perform an offline update on the Client, extract the .zip file containing the updates locally from this URL: Run the applicable command: On Windows: (Change "C:/path/to" to the folder where the updates were extracted) maltego -ufile:/C:/path/to/updates.xml.gz On Linux/Mac: (Change "/path/to" to the folder where the updates were extracted) maltego -ufile:/path/to/updates.xml.gz
https://docs.maltego.com/support/solutions/articles/15000008827-updating-the-maltego-desktop-client
2020-03-28T20:11:33
CC-MAIN-2020-16
1585370493120.15
[array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/2015007071562/original/qUP-xX8aXjJNUVQdO61P6FsvWGUHQmJe0w.png?1545225382', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003754585/original/NNh8CtNQABNqYE4QfYiLB0FXJGjJdGcFmg.png?1526476378', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/2015007071527/original/KxleQ342gi9r-sDBZqWpH2yNUNBcC6uzmQ.png?1545225327', None], dtype=object) ]
docs.maltego.com
audioRoutingGroup resource type Namespace: microsoft.graph Important APIs under the /beta version in Microsoft Graph are subject to change. Use of these APIs in production applications is not supported. The audio routing group stores a private audio route between participants in a multiparty conversation. The source is the participant itself and the receivers are a subset of the other participants in the multiparty conversation. Note: ConfigureMixer does not involve any routes; it sets the volume levels for source-receiver combinations for the entire call. Methods Properties Note: The routing mode determines the restrictions on the sources and receivers. Only the following routing groups are supported. - oneToOne - sources and receivers have only one participant each. - multicast - the source has one participant but there are multiple receivers. The receivers list may be updated. Note: If you create many audio routing groups (e.g. a bot per participant), only the audio of the top 4 dominant speakers is forwarded. This means that even with a customized audio routing group, if the speaker is not loud enough in the main mixer, he/she cannot be heard by the bot, even if there is a private audio group just for this speaker and the bot. Relationships None JSON representation The following is a JSON representation of the resource.

{
  "id": "string (identifier)",
  "receivers": [ "string" ],
  "routingMode": "oneToOne | multicast",
  "sources": [ "string" ]
}
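To make the shape concrete, here is a hypothetical instance of the resource; the identifier and participant IDs are invented for illustration and do not correspond to any real call. It describes a oneToOne group that routes audio from a single source participant to a single receiver:

{
  "id": "audioRoutingGroup-1",
  "routingMode": "oneToOne",
  "sources": [ "participant-a" ],
  "receivers": [ "participant-b" ]
}

A multicast group would look the same except that routingMode would be "multicast" and the receivers array could list several participant IDs.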
https://docs.microsoft.com/en-us/graph/api/resources/audioroutinggroup?view=graph-rest-beta
2020-03-28T22:16:49
CC-MAIN-2020-16
1585370493120.15
[]
docs.microsoft.com
File and Partition Sensors¶ Sensors are a certain type of operator that keep running until a certain criterion is met. Qubole supports: - File sensor - It monitors a file’s availability at its location. - Hive partition sensor - It monitors the partition’s availability on its parent Hive table. It also monitors the different columns in the Hive table. Currently, Qubole provides only API support to create file and partition sensors. For more information, see sensor-api-index. Airflow uses file and partition sensors for programmatically monitoring workflows.
https://docs.qubole.com/en/latest/user-guide/file-partition-sensors.html
2020-03-28T21:14:47
CC-MAIN-2020-16
1585370493120.15
[]
docs.qubole.com
About the NetApp Data Availability Services app display windows The NetApp Data Availability Services (NDAS) graphical management interface includes several windows in which you can manage backup and restore workflows, and monitor the current status of data protection relationships. Note: In NDAS app displays, "cloud target" and "protected to cloud" can refer to the AWS public cloud or a StorageGRID private cloud, depending on the solution you configured. Dashboard Provides an overview of the data protection environment, including recent activity, summary of target and cloud backups, and alerts. If no disk target and cloud targets are added yet, an option to add them is provided. Data Protection Shows the backup status of all volumes in the peered primary clusters. Clicking on individual rows or tiles displays detailed status for that volume. Three shield icons indicate: Local snapshots – no target or cloud workflow, only Snapshot copies of the volume Protect to disk – disk-to-disk back up to target cluster Protect to cloud – complete disk-disk-cloud in place If a volume has no green shields, no default Snapshot policy is in place. NetApp Data Availability Services applies its default Snapshot policy as part of the protection workflow. Additional status messages show activities in progress and deleted volumes. Allows data protection workflows to be extended from source to target, and target to cloud. Display options: View; display only desired data protection type Group by; sort by Cluster, Size or Protection type Toggle between tabular and tile displays Targets Shows ONTAP disk targets and cloud targets that have already been configured. Allows a new target to be added. Restore Allows searches of the catalog database by volume, LUN or file name. From the search results, allows volumes, LUNs, and single files from a given Snapshot copy to be restored. Provides search and filter by volume options. Policies Shows data protection policies currently available from NetApp Data Availability Services.Note: NDAS applies its own policies for any relationships - local snapshot, SnapMirror to disk, and cloud backup - that it creates on the primary (source) and secondary (target) clusters. These policies begin with the string NdasDefault. Do not delete the default NDAS policies (identified by the beginning string NdasDefault) on the source or target clusters. If they are deleted, NDAS backups will stop. SnapMirror, Snapshot and cloud backup policies cannot be created, modified, or deleted in the NDAS app. Activity Shows details about recent job activity and alerts (notices are purged after 7 days). Cloud Services (1.1.4 and later) Provides Active Data Connector (ADC) for direct read-only access to the S3 bucket data. Settings ( in the upper right of the app) Displays settings and controls for: Version The NDAS app version and available upgrades Support NetApp Serial Number, AWS component ID numbers, and Support Bundle download (including logs for diagnostic purposes) NetApp Active IQ Current status for automatically sending AutoSupport messages to NetApp Support User Admin user name and password change facility AWS Settings Current AWS ID and change facility License Current information — including ID, type, capacity, % used, and expiration date — and an add/renew facility
https://docs.netapp.com/us-en/netapp-data-availability-services/concept_about_display_windows.html
2020-03-28T21:42:10
CC-MAIN-2020-16
1585370493120.15
[]
docs.netapp.com
Most modern webpages are dynamically adding elements or data to the DOM by the use of Javascript. Some are even single-page-applications (SPA) built with React.js, Vue.js, Angular, etc - that in turn means nearly the whole DOM is rendered by Javascript that has to be interpreted by a Javascript engine (usually your Browser). Skrape{it}'s default mode makes simple HTTP requests - and thereby it's not possible to scrape JS driven websites in default mode. Request option mode = Mode.DOM for the win! When using skrape{it}'s DOM mode it emulates a real browser, executes the page's Javascript and returns a parsed representation of the website. It supports parsing of client side rendered markup (thereby it's possible to scrape or parse websites that use React.js, Vue.js, Angular, jQuery and so on). The DOM mode will use the latest Chrome engine to render the page, therefore it provides support for ES6 and modern Javascript features. 💡 Keep in mind: Because of the browser emulation it is not as fast as the default mode! Let's assume a pretty basic scenario. We want to make a request to a website that is rendering data via Javascript. For instance, its markup could look like this - it is adding an extra div element including some text.

Example Markup that renders elements via Javascript

<!DOCTYPE html>
<html lang="en">
<head>
    <title>i'm the title</title>
</head>
<body>
    i'm the body
    <h1>i'm the headline</h1>
    <p>i'm a paragraph</p>
    <p>i'm a second paragraph</p>
</body>
<script>
    var dynamicallyAddedElement = document.createElement("div");
    dynamicallyAddedElement.className = "dynamic";
    var textNode = document.createTextNode("I have been dynamically added via Javascript");
    dynamicallyAddedElement.appendChild(textNode);
    document.body.appendChild(dynamicallyAddedElement);
</script>
</html>

How to scrape client-side rendered DOM elements

fun main() {
    val scrapedData = skrape {
        url = ""
        mode = Mode.DOM // <--- here's the magic
        extract {
            element("div.dynamic").text()
        }
    }
    println(scrapedData)
}

Will print extracted data to the console

> I have been dynamically added via Javascript
https://docs.skrape.it/docs/dsl/extract-client-side-rendered-data
2020-03-28T20:23:28
CC-MAIN-2020-16
1585370493120.15
[]
docs.skrape.it