content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
69. Signing Scripts¶ Note The below information is extensively based in information taken from the PowerShell® Notes for Professionals book. I plan to extend this information based on my day to day usage of the language. 69.1: Signing a script¶ Signing a script is done by using the Set-AuthenticodeSignature -cmdlet and a code-signing certificate. You can also read a certificate from a .pfx-file using: The script will be valid until the certificate expires. If you use a timestamp-server during the signing, the script will continue to be valid after the certificate expires. It is also useful to add the trust chain for the certificate (including root authority) to help most computers trust the certificated used to sign the script. It's recommended to use a timestamp-server from a trusted certificate provider like Verisign, Comodo, Thawte etc. 69.2: Bypassing execution policy for a single script¶ Often you might need to execute an unsigned script that doesn't comply with the current execution policy. An easy way to do this is by bypassing the execution policy for that single process. Example: Or you can use the shorthand: Other Execution Policies: - AllSigned : Only scripts signed by a trusted publisher can be run. - Bypass : No restrictions; all Windows PowerShell scripts can be run. - Default : Normally RemoteSigned, but is controlled via ActiveDirectory - RemoteSigned : Downloaded scripts must be signed by a trusted publisher before they can be run. - Restricted : No scripts can be run. Windows PowerShell can be used only in interactive mode. - Undefined : NA - Unrestricted : Similar to bypass Unrestricted Caveat If you run an unsigned script that was downloaded from the Internet, you are prompted for permission before it runs. 69.3: Changing the execution policy using Set-ExecutionPolicy¶ To change the execution policy for the default scope (LocalMachine), use: To change the policy for a specific scope, use: You can suppress the prompts by adding the -Force switch. 69.4: Get the current execution policy¶ Getting the effective execution policy for the current session: List all effective execution policies for the current session: List the execution policy for a specific scope, ex. process: 69.5: Getting the signature from a signed script¶ Get information about the Authenticode signature from a signed script by using the Get-AuthenticodeSignature cmdlet: 69.6: Creating a self-signed code signing certificate for testing¶ When signing personal scripts or when testing code signing it can be useful to create a self-signed code signing certificate. Beginning with PowerShell 5.0 you can generate a self-signed code signing certificate by using the New-SelfSignedCertificate cmdlet: In earlier versions, you can create a self-signed certificate using the makecert.exe tool found in the .NET Framework SDK and Windows SDK. A self-signed certificate will only be trusted by computers that have installed the certificate. For scripts that will be shared, a certificate from a trusted certificate authority (internal or trusted third-party) are recommended.
https://docs.itops.pt/Powershell/Tutorial/69.%20Signing%20Scripts/
2020-01-18T04:02:02
CC-MAIN-2020-05
1579250591763.20
[]
docs.itops.pt
This documentation is for a previous release of Cloud Manager. Go to the docs for the latest release. Networking requirements for Cloud Manager Edit on GitHub Request doc changes You must set up your networking so that Cloud Manager can deploy Cloud Volumes ONTAP systems in AWS or in Microsoft Azure. The most important step is ensuring outbound internet access to various endpoints. Connection to target networks Cloud Manager requires a network connection to the AWS VPCs and Azure VNets in which you want to deploy Cloud Volumes ONTAP. For example, if you install Cloud Manager in your corporate network, then you must set up a VPN connection to the AWS VPC or Azure VNet in which you launch Cloud Volumes ONTAP. Outbound internet access Cloud Manager requires outbound internet access to deploy and manage Cloud Volumes ONTAP. Outbound internet access is also required when accessing Cloud Manager from your web browser and when running the Cloud Manager installer on a Linux host. The following sections identify the specific endpoints. - Outbound internet access to manage Cloud Volumes ONTAP in AWS Cloud Manager requires outbound internet access to contact the following endpoints when deploying and managing Cloud Volumes ONTAP in AWS: - Outbound internet access to manage Cloud Volumes ONTAP in Azure Cloud Manager requires outbound internet access to contact the following endpoints when deploying and managing Cloud Volumes ONTAP in Microsoft Azure: - Outbound internet access from your web browser Users must access Cloud Manager from a web browser. The machine running the web browser must have connections to the following endpoints: - Outbound internet access to install Cloud Manager on a Linux host The Cloud Manager installer must access the following URLs during the installation process: Ports and security groups If you deploy Cloud Manager from Cloud Central or from the marketplace images, refer to the following: If you install Cloud Manager on an existing Linux host, see Cloud Manager host requirements.
https://docs.netapp.com/us-en/occm36/reference_networking_cloud_manager.html
2020-01-18T03:08:47
CC-MAIN-2020-05
1579250591763.20
[]
docs.netapp.com
Table of Contents Product Index Footsteps like an earthquake… a shout like thunder… and a roar like a hurricane! Giants have emerged from the corners of the world! Whether the children of the titans, cosmic mutations, or simply alien species, these massive beings are ready to assail the fortresses of your heroes, or perhaps even wait on top of a cloud at the peak of a beanst.
http://docs.daz3d.com/doku.php/public/read_me/index/36739/start
2020-01-18T04:04:55
CC-MAIN-2020-05
1579250591763.20
[]
docs.daz3d.com
Class to manage vector graphics importers. More... #include <graphics_import_mgr.h> Class to manage vector graphics importers. Definition at line 38 of file graphics_import_mgr.h. Definition at line 48 of file graphics_import_mgr.h. List of handled file types. Definition at line 42 of file graphics_import_mgr.h. Construct an import plugin manager, with a specified list of filetypes that are not permitted (can be used for when some file type support is not available due to config or other reasons) Definition at line 32 of file graphics_import_mgr.cpp. References DXF, m_importableTypes, and SVG. Vector containing all GFX_FILE_T values that can be imported. Definition at line 58 of file graphics_import_mgr.h. References m_importableTypes. Referenced by GetPluginByExt(). Returns a plugin instance for a specific file type. Definition at line 48 of file graphics_import_mgr.cpp. Referenced by GetPluginByExt(). Returns a plugin that handles a specific file extension. Definition at line 65 of file graphics_import_mgr.cpp. References compareFileExtensions(), GetImportableFileTypes(), and GetPlugin(). Definition at line 70 of file graphics_import_mgr.h. Referenced by GetImportableFileTypes(), and GRAPHICS_IMPORT_MGR().
https://docs.kicad-pcb.org/doxygen/classGRAPHICS__IMPORT__MGR.html
2020-01-18T03:26:26
CC-MAIN-2020-05
1579250591763.20
[]
docs.kicad-pcb.org
January 2017 Volume 32 Number 1 [HoloLens] Introduction to the HoloLens, Part 2: Spatial Mapping By Adam Tuliper | January 2017 In my last article, I talked about the three pillars of input for the HoloLens—gaze, gesture and voice (msdn.com/magazine/mt788624). These constructs allow you to physically interact with the HoloLens and, in turn, the world around you. You’re not constrained to working only with them, however, because you can access information about your surroundings through a feature called spatial mapping, and that’s what I’m going to explore in this article. If I had to choose a single favorite feature on the HoloLens, it would be spatial mapping. Spatial mapping allows you to understand the space around you, either explicitly or implicitly. I can explicitly choose to work with the information taken in, or I can proceed implicitly by allowing natural physical interactions, like dropping a virtual ball onto a physical table, to take place. Recently, with some really neat updates to the HoloToolkit from Asobo Studio, it’s easy to scan for features in your environment, such as a chair, walls and more. What Is a 3D Model? It might be helpful to understand what a 3D model is before looking at what a spatial map of your area represents. 3D models come in a number of file formats, such as .ma or .blender, but often you’ll find them in either of two proprietary Autodesk formats called .FBX (Filmbox) or .OBJ files. .FBX files can contain not only 3D model information, but also animation data, though that isn’t applicable to this discussion. A 3D model is a fairly simple object, commonly tracked via face-vertex meshes, which means tracking faces and vertices. For nearly all modern hardware, triangles are used for faces because triangles are the simplest of polygons. Inside a 3D model you’ll find a list of all vertices in the model (made up of x,y,z values in space); a list of the vertex indices that make up each triangle; normals, which are just descriptive vectors (arrows) coming off each vertex used for lighting calculations so you know how light should interact with your model; and, finally, UV coordinates—essentially X,Y coordinates that tell you how to take a 2D image, called a texture, and wrap it around your model like wrapping paper to make it look like it was designed. Figure 1 shows virtual Adam, a model that the company xxArray created for me because, well, I wanted to put myself into a scene with zombies. This is just a 3D model, but note the legs, which are made of vertices and triangles, and that the pants texture is, in simple terms, wrapped around the 3D model of the legs to look like pants. That’s nearly all the magic behind a 3D model. Figure 1 UV Mapping of 2D Texture to 3D Object .png) What Does Spatial Mapping Look Like? Spatial mapping is easier in some ways because you’re not dealing with the textures of your environment. All you typically care about is having a fairly accurate mesh created from your environment that can be discovered. The environment is scanned so you can interact with it. Figure 2 shows a scenario slightly more like what you’ll actually get, though contrived. The model on the left shows the vertices, triangles and normals. You can’t see the normal directly, of course, but you see its result by how the object is shaded. Figure 2 What’s Needed for Rendering and for the Physics Engine .png) What you’ve seen thus far in both 3D model scenarios is purely for rendering and has absolutely nothing to do (yet) with physics. The green box outline on the right in Figure 2 is the shape of the collider I’ve moved off the cube to show a point; this is the component that defines the region to the physics system. If you want to fully interact with the world on the HoloLens, a game or in any 3D experience, really, you need a collider for the physics system to use. When you turn the HoloLens on and are in the holographic shell, it’s always mapping your environment. The HoloLens does this to understand where to place your windows. If I walk around my house with the HoloLens, it’s always updating its information about my environment. This serves two purposes: First, when I walk into a room I’ve been in previously, the HoloLens should show me the windows I had open. Second, environments are always changing and it needs to detect those changes. Think of the following common scenarios: someone walks in front of me, my kids are running around in the house, our pet bear walks by and creates a large occlusion zone I can’t see through. The point is, the environment is potentially always changing and the HoloLens is looking for these changes. Before delving into the API, let’s see the spatial mapping in practice (and, by the way, I don’t have a real pet bear). To view spatial mapping in action, you can connect to the Windows Device Portal on a HoloLens, which allows remote management and viewing of the device, including a live 30 FPS video stream of what the device sees. The device portal can be run for nearly any Windows 10 device. It can be accessed by going to the device IP, or to 127.0.0.1:10080 for devices plugged in over USB once it’s been enabled on the HoloLens in the Developer Settings. Most Windows 10 devices can be enabled for a device portal as outlined at bit.ly/2f0cnfM. Figure 3 and Figure 4 show the spatial mesh retrieved from the 3D view in the device portal. Figure 3 shows what the HoloLens sees as soon as I turn it on, while Figure 4 displays the view after a brief walk through my living room. Note the chair next to the far wall on the right, as that appears later on (in Figure 9) when I ask the spatial understanding library to find me a sittable surface. Figure 3 HoloLens Spatial Mesh Right After HoloLens Is Turned on in a New Room .png) Figure 4 HoloLens Spatial Mesh After a Quick Walk-Through a Portion of the Room .png) How Spatial Mapping Works Spatial mapping works via a SurfaceObserver object, as you’re observing surface volumes, watching for new, updated and removed surfaces. All the types you need to work with come with Unity out of the box. You don’t need any additional libraries, though the HoloToolkit-Unity repository on GitHub has lots of functionality for the HoloLens, including some amazing surface detection I’ll look at later, so this repository should be considered essential for hitting the ground running. First, you tell the SurfaceObserver that you’re observing a volume: public Vector3 Extents = new Vector3(10, 10, 10); observer = new SurfaceObserver(); // Start from 0,0,0 and fill in a 10 meter cube volume // as you explore more of that volume area observer.SetVolumeAsAxisAlignedBox(Vector3.zero,Extents); The larger the region, the greater the computational cost that can occur. According to the documentation, spatial mapping scans in a 70-degree cone a region between 0.8 and 3.1 meters—about 10 feet out (the docs state these values might change in the future). If an object is further away, it won’t be scanned until the HoloLens gets closer to it. Keeping to 0.8 meters also ensures the user’s hands won’t accidentally be included as part of the spatial mesh of the room. The process to get spatial data into an application is as follows: - Notify the SurfaceObserver to observe a region of size A and shape B. - At a predefined interval (such as every 3 seconds), ask the SurfaceObserver for an update if you aren’t waiting on other results to be processed. (It’s best not to overlap results; let one mesh finish before the next is processed.) - Surface Observer lets you know if there’s an add, update or removal of a surface volume. - If there’s an add or update to your known spatial mesh: - Clean up old surface if one exists for this id. - Reuse (to save memory, if you have a surface that isn’t being used) or allocate a new SurfaceObject with mesh, collider and world anchor components. - Make an async request to bake the mesh data. - If there’s a removal, remove the volume and make it inactive so you can reuse its game object later (this prevents additional allocations and thus fewer garbage collections). To use spatial mapping, SpatialPerception is a required capability in a Universal Windows Platform (UWP) app. Because an end user should be aware that an application can scan the room, this needs to be noted in the capabilities either in the Unity player settings as shown in Figure 5, or added manually in your application’s package.appxmanifest. Figure 5 Adding SpatialPerception in File-Build Settings .png) The spatial meshes are processed in surface volumes that are different from the bounding volume defined for the SurfaceObserver to observe. The key is once the SurfaceObserver_OnSurface delegate is called to note surface volume changes, you request the changes in the next frame. The meshes are then prepared in a process called baking, and a SurfaceObserver_OnDataReady callback is processed when the mesh is ready. Baking is a standard term in the 3D universe that usually refers to calculating something ahead of time. It’s typically used to talk about calculating lighting information and transferring it to a special image called a lightmap in the baking process. Lightmaps help avoid runtime calculations. Baking a mesh can take several frames from the time you ask for it in your Update function (see Figure 6). For performance’s sake, request the mesh only from RequestMeshAsync if you’re actually going to use it, otherwise you’re doing extra processing when you bake it for no reason. Figure 6 The Update Function private void Update() { // Only do processing if you should be observing. // This is a flag that should be turned on or off. if (ObserverState == ObserverStates.Running) { // If you don't have a mesh creation pending but you could // schedule a mesh creation now, do it! if (surfaceWorkOutstanding == false && surfaceWorkQueue.Count > 0) { SurfaceData surfaceData = surfaceWorkQueue.Dequeue(); // If RequestMeshAsync succeeds, then you've scheduled mesh creation. // OnDataReady is left out of this demo code, as it performs // some basic cleanup and sets some material/shadow settings. surfaceWorkOutstanding = observer.RequestMeshAsync(surfaceData, SurfaceObserver_OnDataReady); } // If you don't have any other work to do, and enough time has passed since // previous update request, request updates for the spatial mapping data. else if (surfaceWorkOutstanding == false && (Time.time - updateTime) >= TimeBetweenUpdates) { // You could choose a new origin here if you need to scan // a new area extending out from the original or make Extents bigger. observer.SetVolumeAsAxisAlignedBox(observerOrigin, Extents); observer.Update(SurfaceObserver_OnSurfaceChanged); updateTime = Time.time; } } } private void SurfaceObserver_OnSurfaceChanged( SurfaceId id, SurfaceChange changeType, Bounds bounds, System.DateTime updateTime) { GameObject surface; switch (changeType) { case SurfaceChange.Added: case SurfaceChange.Updated: // Create (or get existing if updating) object on a custom layer. // This creates the new game object to hold a piece // of the spatial mesh. surface = GetSurfaceObject(id.handle, transform); // Queue the request for mesh data to be handled later. QueueSurfaceDataRequest(id, surface); break; case SurfaceChange.Removed: // Remove surface from list. // ... break; } } The Update code is called every frame on any game object deemed responsible for getting the spatial meshes. When surface volume baking is requested via RequestMeshAsync, the request is passed a SurfaceData structure in which you can specify the scanning density (resolution) in triangles per cubic meter to process. When TrianglesPerCubicMeter is greater than 1000, you get fairly smooth results that more closely match the surfaces you’re scanning. On the other hand, the lower the triangle count, the better the performance. A resolution of <100 is very fast, but you lose surface details, so I recommend trying 500 to start and adjusting from there. Figure 7 uses about 500 TrianglesPerCubicMeter. The HoloLens already does some optimizations on the mesh, so you’ll need to performance test your applications and make a determination whether you want to scan and fix up more (use less memory) or just scan at a higher resolution, which is easier but uses more memory. Figure 7 A Virtual Character Detecting and Sitting on a Real-World Item (from the Fragments Application) .png) Creating the spatial mesh isn’t a super high-resolution process by design because higher resolution equals significantly more processing power and usually isn’t necessary to interact with the world around you. You won’t be using spatial mapping to capture a highly detailed small figurine on your countertop—that’s not what it’s designed for. There are plenty of software solutions for that, though, via a technique called photogrammetry, which can be used for creating 3D models from images, such as Microsoft 3D Builder, and many others listed at bit.ly/2fzcH1z and bit.ly/1UjAt1e. The HoloLens doesn’t include anything for scanning and capturing a textured 3D model, but you can find applications to create 3D models on the HoloLens, such as HoloStudio, or you can create them in 3D Builder (or in any 3D modeling software for that matter) and bring them into Unity to use on the HoloLens. You can also now live stream models from Unity to the HoloLens during development with the new Holographic emulation in Unity 5.5. Mesh colliders in Unity are the least-performant colliders, but they’re necessary for surfaces that don’t fit primitive shapes like boxes and spheres. As you add more triangles on the surfaces and add mesh colliders to them, you can impact physics performance. SurfaceData’s last parameter is whether to bake a collider: SurfaceData surfaceData = new SurfaceData(id, surface.GetComponent<MeshFilter>(), surface.GetComponent<WorldAnchor>(), surface.GetComponent<MeshCollider>(), TrianglesPerCubicMeter, bakeCollider); You may never need a collider on your spatial mesh (and thus pass in bakeCollider=false) if you only want to detect features in the user’s space, but not integrate with the physics system. Choose wisely. There are plenty of considerations for the scanning experience when using spatial mapping. Applications may opt not to scan, to scan only part of the environment or to ask users to scan their environment looking for certain-size surfaces like a couch. Design guidelines are listed on the “Spatial Mapping Design” page of the Windows Dev Center (bit.ly/2gDqQQi) and are important to consider, especially because understating scenarios can introduce various imperfections into your mesh, which fall into three general categories discussed on the “Spatial Mapping Design” page—bias, hallucinations and holes. One workflow would be to ask the user to scan everything up front, such as is done at the beginning of every “RoboRaid” session to find the appropriate surfaces for the game to work with. Once you’ve found applicable surfaces to use, the experience starts and uses the meshes that have been provided. Another workflow is to scan up front, then scan continually at a smaller interval to find real-world changes. Working with the Spatial Mesh Once the mesh has been created, you can interact with it in various ways. If you use the HoloToolkit, the spatial mesh has been created with a custom layer attribute. In Unity you can ignore or include layers in various operations. You can shoot an invisible arrow out in a common operation called a raycast, and it will return the colliders that it hit on the optionally specified layer. Often I’ll want to place holograms in my environment, on a table or, even like in “Young Conker” (bit.ly/2f4Ci4F), provide a location for the character to move to by selecting an area in the real world (via the spatial mesh) to which to go. You need to understand where you can intersect with the physical world. The code in Figure 8 performs a raycast out to 30 meters, but will report back only areas hit on the spatial mapping mesh. Other holograms are ignored if they aren’t on this layer. Figure 8 Performing a Raycast // Do a raycast into the world that will only hit the Spatial Mapping mesh. var headPosition = Camera.main.transform.position; var gazeDirection = Camera.main.transform.forward; RaycastHit hitInfo; // Ensure you specify a length as a best practice. Shorter is better as // performance hit goes up roughly linearly with length. if (Physics.Raycast(headPosition, gazeDirection, out hitInfo, 10.0f, SpatialMappingManager.Instance.LayerMask)) { // Move this object to where the raycast hit the Spatial Mapping mesh. this.transform.position = hitInfo.point; // Rotate this object to face the user. Quaternion rotation = Camera.main.transform.localRotation; rotation.x = 0; rotation.z = 0; transform.rotation = rotation; } I don’t have to use the spatial mesh, of course. If I want a hologram to show up and the user to be able to place it wherever he wants (maybe it always follows him) and it will never integrate with the physical environment, I surely don’t need a raycast or even the mesh collider. Now let’s do something fun with the mesh. I want to try to determine where in my living room an area exists that a character could sit down, much like the scene in Figure 7, which is from “Fragments,” an amazing nearly five-hour mystery-solving experience for the HoloLens that has virtual characters sitting in your room at times. Some of the code I’ll walk through is from the HoloToolkit. It came from Asobo Studio, which worked on “Fragments.” Because this is mixed reality, it’s just plain awesome to develop experiences that mix the real world with the virtual world. Figure 9 is the end result from a HoloToolkit-Examples—SpatialUnderstandingExample scene that I’ve run in my living room. Note that it indicates several locations that were identified as sittable areas. Figure 9 The HoloToolkit SpatialUnderstanding Functionality .jpg) The entire code example for this is in the HoloToolkit, but let’s walk through the process. I’ve trimmed down the code into applicable pieces. (I’ve talked about SurfaceObserver already so that will be excluded from this section.) SpatialUnderstandingSourceMesh wraps the SurfaceObserver through a SpatialMappingObserver class to process meshes and will create the appropriate MeshData objects to pass to the SpatialUnderstaing DLL. The main force of this API lies in this DLL in the HoloToolkit. In order to look for shapes in my spatial mesh using the DLL, I must define the custom shape I’m looking for. If I want a sittable surface that’s between 0.2 and 0.6 meters off the floor, made of at least one discrete flat surface, and a total surface area minimum of 0.2 meters, I can create a shape definition that will get passed to the DLL through AddShape (see Figure 10). Figure 10 Creating a Shape Definition ShapeDefinitions.cs // A "Sittable" space definition..20f), }), }; // Tell the DLL about this shape is called Sittable. AddShape("Sittable", shapeComponents); Next, I can detect the regions and then visualize or place game objects there. I’m not limited to asking for a type of shape and getting all of them. If I want, I can structure my query to QueryTopology_FindLargePositionsOnWalls or QueryTopology_FindLargestWall, as shown in Figure 11. Figure 11 Querying for a Shape SpaceVisualizer.cs (abbreviated) const int QueryResultMaxCount = 512; private ShapeResult[] resultsShape = new ShapeResult[QueryResultMaxCount]; public GameObject Beacon; public void FindSittableLocations() { // Pin managed object memory going to native code. IntPtr resultsShapePtr = SpatialUnderstanding.Instance.UnderstandingDLL. PinObject(resultsShape); // Find the half dimensions of "Sittable" objects via the DLL. int shapeCount = SpatialUnderstandingDllShapes.QueryShape_FindShapeHalfDims( "Sittable", resultsShape.Length, resultsShapePtr); // Process found results. for(int i=0;i<shapeCount;i++) { // Create a beacon at each "sittable" location. Instantiate(Beacon, resultsShape[i].position, Quaternion.identity); // Log the half bounds of our sittable area. Console.WriteLine(resultsShape[i].halfDims.sqrMagnitude < 0.01f) ? new Vector3(0.25f, 0.025f, 0.25f) : resultsShape[i].halfDims) } } There’s also a solver in the HoloToolkit that allows you to provide criteria, such as “Create 1.5 meters away from other objects”: List<ObjectPlacementRule> rules = new List<ObjectPlacementRule>() { ObjectPlacementRule.Create_AwayFromOtherObjects(1.5f), }; // Simplified api for demo purpose – see LevelSolver.cs in the HoloToolkit. var queryResults = Solver_PlaceObject(....) After executing the preceding query to place an object, you get back a list of results you can use to determine the location, bounds and directional vectors to find the orientation of the surface: public class ObjectPlacementResult { public Vector3 Position; public Vector3 HalfDims; public Vector3 Forward; public Vector3 Right; public Vector3 Up; }; Wrapping Up Spatial mapping lets you truly integrate with the world around you and engage in mixed-reality experiences. You can guide a user to scan her environment and then give her feedback about what you’ve found, as well as smartly determine her environment for your holograms to interact with her. There’s no other device like the HoloLens for mixing worlds. Check out HoloLens.com and start developing mind-blowing experiences today. Next time around, I’ll talk about shared experiences on the HoloLens. Until then, keep developing! Adam Tuliper is a senior technical evangelist with Microsoft living in sunny SoCal. He’s a Web dev/game dev Pluralsight.com author and all-around tech lover. Find him on Twitter: @AdamTuliper or at adamtuliper.com. Thanks to the following Microsoft technical expert for reviewing this article:Jackson Fields
https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/january/hololens-introduction-to-the-hololens-part-2-spatial-mapping
2020-01-18T04:03:25
CC-MAIN-2020-05
1579250591763.20
[array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%201_hires(en-us,MSDN.10', 'UV Mapping of 2D Texture to 3D Object UV Mapping of 2D Texture to 3D Object'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%202_hires(en-us,MSDN.10', 'What’s Needed for Rendering and for the Physics Engine What’s Needed for Rendering and for the Physics Engine'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%203_hires(en-us,MSDN.10', 'HoloLens Spatial Mesh Right After HoloLens Is Turned on in a New Room HoloLens Spatial Mesh Right After HoloLens Is Turned on in a New Room'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%204_hires(en-us,MSDN.10', 'HoloLens Spatial Mesh After a Quick Walk-Through a Portion of the Room HoloLens Spatial Mesh After a Quick Walk-Through a Portion of the Room'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%205_hires(en-us,MSDN.10', 'Adding SpatialPerception in File-Build Settings Adding SpatialPerception in File-Build Settings'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%207_hires(en-us,MSDN.10', 'A Virtual Character Detecting and Sitting on a Real-World Item (from the Fragments Application) A Virtual Character Detecting and Sitting on a Real-World Item (from the Fragments Application)'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt745096.Tuliper_Figure%209(en-us,MSDN.10', 'The HoloToolkit SpatialUnderstanding Functionality The HoloToolkit SpatialUnderstanding Functionality'], dtype=object) ]
docs.microsoft.com
Spatial Regression Models (spreg)¶ spreg, short for “spatial regression,” is a python package to estimate simultaneous autoregressive spatial regression models. These models are useful when modeling processes where observations interact with one another. For more information on these models, consult the Spatial Regression short course by Luc Anselin (Spring, 2017), with the Center for Spatial Data Science at the University of Chicago:
https://spreg.readthedocs.io/en/latest/
2020-01-18T02:38:31
CC-MAIN-2020-05
1579250591763.20
[]
spreg.readthedocs.io
Exchange Data Link Admin /dev/tty2 For SMSI, SMDI, VMS, or ACL, specify the device definition of the serial port on the pSeries computer to which the exchange data link is connected. Specify the device name as configured in SMIT. This is usually /dev/ttyn or an SMSI or VMS exchange data link or /dev/mpqn for an ACL exchange data link, where n is the number of the port used to create the physical exchange data link connection. For a CallPath_SigProc link, specify the server name or Internet Protocol (IP) address of the CallPath Server system. The exact format depends on how your CallPath Server systems are set up.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/i896523.html
2019-08-17T10:28:48
CC-MAIN-2019-35
1566027312128.3
[]
docs.blueworx.com
About Anypoint Runtime Fabric Anypoint Runtime Fabric is a container service that automates the deployment and orchestration of your Mule applications and gateways. Runtime Fabric runs on customer-managed infrastructure on AWS, Azure, virtual machines (VMs) or bare-metal servers. Some of the capabilities Anypoint Runtime Fabric provides are the following: Isolation between applications by running a separate Mule runtime per application. The ability to run multiple versions of the Mule runtime on the same set of resources. Scaling applications across multiple replicas. Automated application fail-over capabilities. Application management with Anypoint Runtime Manager. Architecture Anypoint Runtime Fabric is composed of a set of VMs, each serving as one of the following roles: Controller - VMs dedicated for operating Runtime Fabric, including orchestration services, distributed database, load balancing, and services enabling Anypoint Platform management. Worker - VMs dedicated for running Mule applications and API gateways. This separation of responsibilities enables scaling of the worker VMs based on the number of Mule applications. It also enables scaling the controller VMs based on the the frequency of deployments, changes in application state, and amount of inbound traffic. To ensure resources are available to re-schedule and re-deploy applications in the event of a hardware failure, we recommended over-provisioning the number of worker VMs. By default, the services operating Runtime Fabric are deployed across the controller VMs to avoid a single point of failure in the system. Anypoint Runtime Fabric uses a set of technologies, including Docker and Kubernetes, which are tuned to operate well with Mule runtimes. Knowledge of these technologies is not required to deploy or manage Mules on Runtime Fabric. Managing Runtime Fabric requires the operational and infrastructure-level experience needed to support any system at scale. We recommend following best practices and running fire drill scenarios in controlled environments to help prepare for unexpected failures. Comparing to Other PaaS Anypoint Runtime Fabric contains specific versions of all components required to function as designed. Each component, including Docker and Kubernetes, are tuned to operate efficiently with the Mule runtime and other MuleSoft services. Installing Runtime Fabric within an existing Kubernetes-based PaaS is not supported. For customers already running on a PaaS, it’s recommended to deploy Runtime Fabric alongside the PaaS to take advantage of the complete benefits of Anypoint Platform. Connectivity to Anypoint Cloud Control Plane Anypoint Runtime Fabric supports the following: Deploying applications from Anypoint Runtime Manager. Deploying policy updates of API gateways using API Manager. Integration with Anypoint Exchange to store and retrieve related assets. To enable this, Runtime Fabric establishes an outbound connection to Anypoint Cloud control plane via the AMQP protocol and secured using mutual TLS. A set of services running on the controller VMs initiate outbound connections to retrieve the metadata and assets required for application deployment. These services then translate and communicate with other internal services to cache the assets locally and deploy the application. Comparing to Standalone Mule Runtimes On-premise deployments of Mule applications require you to install a version of the Mule runtime on a server and deploy one or more applications on the server. Each application shares the resources available to the Mule runtime. Other resources such as certificates or database connections can also be shared using domains. On Anypoint Runtime Fabric, each Mule application and API gateway runs with their own Mule runtime, and in their own container. Each application deployment specifies the amount of resources the container can access. This enables Mule applications to horizontally scale across VMs without relying on dependencies. It also safeguards each application from competing with another application’s resources on the same VM.
https://docs.mulesoft.com/runtime-fabric/1.0/
2019-08-17T11:10:47
CC-MAIN-2019-35
1566027312128.3
[]
docs.mulesoft.com
Working with Synonyms¶ Synonym is one of the main data entity types in Apptus eSales Enterprise. Synonyms are used to extend searches of a phrase to include similar search phrases. Typical use cases include managing common misspellings, dialects, and slang words. Disclaimer App design and features are subject to change without notice. Screenshots, including simulated data visible, are for illustrative purposes only. Synonym basics¶ A synonym includes a locale, one search phrase, and one synonym phrase that will be used together with the search phrase. Synonyms can be set for individual locales or to be used world-wide, e.g. on all locales, and are primarily managed in the Synonyms tab in the Experience app. A retailer must use eSales search functions to be able to use synonyms with their search phrases. To use synonym evaluation and auto generated synonyms, notifications for adding-to-cart and payment, such as the Adding to cart notification and the Secure payment notification, must be implemented. Searching with synonyms¶ A search phrase goes through several stages of evaluation, where synonyms are applied early and is the third stage after tokenisation and stemming. The synonyms are matched with both the loose and the strict version of the search word. If matching with a synonym with a loose hit, only the loose synonym will be used. Multi word synonyms are stemmed by each word but will be matched as one phrase. For example, the search phrase Womens dress is tokenised and normalised as two words, womens and dress. Stemming will return womens as a strict match and women as a loose match for the first word, and dress as a strict match for the second word. These words will then be matched with the synonyms available in eSales. In this example, the synonyms found are female (loose match) and femal (loose match) based on womens and women, and skirt for dress (loose match). All combinations of strict, loose, and synonyms of the original phrase are then matched with product search attributes in the next stage of the search phrase evaluation. Did-you-mean is commonly used with the eSales search functions, but it does not take synonyms in consideration. Managing synonyms¶ Management of synonyms are primarily performed in the Synonyms tab in the Experience app. The Synonyms tab allows for adding, editing, and removing synonyms for individual locales and also for worldwide synonyms (synonyms used with all locales), as well as approving or rejecting auto generated synonyms and disable or activate. Synonyms can be filtered based on their creation type, active or disabled status, and evaluated impact. Adding synonyms¶ Apptus eSales Enterprise allows for the addition of synonyms in several different ways, including automatically generated synonyms. Upload Excel-files¶ Excel-files with synonyms can be uploaded in the Synonyms tab in the Experience app. For more information, see Working with Imports. Import the Synonym data entity type¶ The synonym data entity type can be imported via the Web API v2. For more information, see Working with Imports. Create individual synonyms¶ Individual synonyms can be created directly in the Synonyms tab in the Experience app by clicking the prominent plus icon in the top left hand corner. This will bring up a dialogue where the user can enter locale, search phrase, and synonym. Create from Search phrase report¶ A synonym can be created directly from a search phrase in the Search phrase report in the Experience app. When clicking the plus icon on the right hand side of the search phrase row, a dialogue will appear where the user can enter locale, search phrase, and synonym. The search phrase is automatically filled in with the selected phrase. Auto generated synonyms¶ Synonyms are automatically generated by eSales based on behavioural data, previous searches, and product attributes. Each synonym is evaluated before it is suggested as a synonym candidate. Before any suggested auto generated synonyms can be used they must be approved by a user in the Synonyms tab in the Experience app. Each auto generated synonym is denoted by a robot icon next to the synonym, and all auto generated synonyms are active once approved. A retailer must have notifications for adding-to-cart and payment implemented to use auto generated synonyms. Synonym status¶ The status of a synonym is either active or inactive, and a synonym must be active to be used in the search phrase evaluation. Setting synonyms as inactive is useful for occasions such as when seasonally used synonyms are out of season, or when the synonym evaluation has deemed a synonym to have a negative impact. Instead of removing them and re-adding them, they can temporarily be set as inactive instead. Changing the status of a synonym can be done per phrase, or in bulk when importing the synonym data type. Synonym evaluation¶ Synonyms are automatically evaluated by eSales on their impact in relation to purchases. Impact is either positive, negative, uncertain, or untested. An uncertain impact means that the synonym can not be considered to have an explicit positive or negative impact. Clicking a synonym will show a more detailed chart of the impact. A synonym must be present in 20 search queries that relate to purchases via an add-to-cart action before its impact is tested. A retailer must have notifications for adding-to-cart and payment implemented to use synonym evaluation. Panels supporting synonyms¶ The following predefined search panels utilise synonyms when evaluating search phrases.
https://docs.apptus.com/esales-enterprise/guides/working-with/synonyms/
2020-07-02T13:06:02
CC-MAIN-2020-29
1593655878753.12
[array(['../../../img/guides/working-with/synonyms/search-phrase-steps-1514x486.png', 'First steps of the search phrase evaluation'], dtype=object) array(['../../../img/guides/working-with/synonyms/synonyms-overview-1514x800.png', 'Synonyms tab in the Experience app'], dtype=object) array(['../../../img/guides/working-with/synonyms/synonyms-auto-generated-1514x910.png', 'Auto generated synonyms'], dtype=object) array(['../../../img/guides/working-with/synonyms/synonyms-edit-synonym-1514x800.png', 'Edit synonym - Synonym status'], dtype=object) ]
docs.apptus.com
An Act to create 103.10 (15) of the statutes; Relating to: exempting from the state family and medical leave law an employer that is covered under the federal family and medical leave law. (FE) Bill Text (PDF: ) Fiscal Estimates and Reports SB490 ROCP for Committee on Labor and Regulatory Reform (PDF: ) Wisconsin Ethics Commission information 2017 Assembly Bill 772 - A - Labor
https://docs.legis.wisconsin.gov/2017/proposals/reg/sen/bill/sb490
2020-07-02T13:13:48
CC-MAIN-2020-29
1593655878753.12
[]
docs.legis.wisconsin.gov
Description This article describes how to understand the Virtual Networks and Servers page, as accessed from the Compute dropdown menu on the Home page. Prerequisites: - Users with any role can view the Virtual Networks and Servers dashboard, but only a user with Primary Administrator, Network, or Server role can navigate to a Network or Server Dashboard. Content / Solution: Click on the Compute drop-down menu, and select Virtual Networks and Servers: You will be directed to the Virtual Networks and Servers page: Select the desired Region from the Region drop-down menu: From the Virtual Networks and Servers dashboard, you can perform several actions, including: Adding a Network or Network Domain - See: Deploying a Server - See: Connecting to the Region VPN - See: The Name column identifies the name of a Resource (Data Center, Network, Network Domain, Server). Click on the dropdown icon next to a resource to expand it. The Services column identifies which services are available in that Data Center for a particular resource. icon indicates that Backup services are available. icon indicates that Monitoring service is available. icon indicates that DRS for Cloud is available. icon indicates thatCloud Server Snapshot feature is available. If the icons are greyed out ( , , , ) this indicates that the service is available, but not enabled. - If the Backup icon is greyed out, it may indicate that Backups are unavailable. Hovering your mouse over the icon will indicate the reason The Primary IPv4 and Primary IPv6 columns display the respective IP addresses for each resource. - NIC Connection State - If a NIC has been disconnected, the Introduction to NIC Connection Status icon will be displayed. For more information on NIC Connection status, see - Note: If no such icon appears, the NIC is connected The CPU, RAM, and Storage columns display the respective counts of each resource. The last column contains the Manage gear - which, depending on the type of resource will allow you to take further actions against a resource, such as connect to the Data Center VPN or add a Network Domain/Network You can expand a Data Center (drill-down) by clicking on the drop-down button. Expanding a Data Center will display any deployed Network Domains: Once a Data Center has been expanded, you can then expand a Network Domain to display any deployed Servers associated with that Resource:Note: as you drill down into a Data Center, more information is displayed. For example, in the above image, the two MCP 1.0 Servers do not have associated IPv6 Addresses (available in MCP 2.0 Data Centers) so N/A is displayed; but you can see their allocated CPU, RAM, and Storage. The three MCP 2.0 Servers display IPv4, IPv6, CPU, RAM, and Storage. Note: If a Server has a "Managed Server" tag, it cannot be managed by users. Note: Hovering over an asset icon (Data Center, Network Domain, Server) will display a popup with additional information about the asset: Clicking on a Network Domain will direct you to the Network Domain Dashboard (respectively). See: Clicking on a Server will direct you to the Server Dashboard. See: Clicking on the Filter button in the Name, IPv4 or IPv6 columns will enable you to filter how your Cloud Resources are displayed. Click on the Filter Button. The filter dialog will be displayed: Select the desired filter from the drop-down menu: Enter the filter parameter, then click Filter: The system will display the filtered results. Managed Server Special Trait A Server tagged with the Managed Server special trait is a normal Cloud Server that has been marked for internal management (for example, as part of a managed service contract) and as such has limited access for User management, but is managed through additional services available through your service provider. Note: If a Server has been tagged with this special trait, you cannot access Virtual Console on the Server.
https://docs.mcp-services.net/display/CCD/Navigating+the+Virtual+Networks+and+Servers+Dashboard
2020-07-02T13:14:01
CC-MAIN-2020-29
1593655878753.12
[]
docs.mcp-services.net
Pods use ephemeral storage for their internal operation such as saving temporary files. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Prior to OKD 3.10, ephemeral local storage was exposed to pods through the container’s writable layer, logs directory, and EmptyDir volumes. Issues related to the lack of local storage accounting and isolation include the following: Pods do not know how much local storage is available to them. Pods cannot request guaranteed local storage. Local storage is a best effort resource. Pods can get evicted due to other pods filling the local storage, after which, new pods are not admitted until sufficient storage has been reclaimed. Ephemeral storage is still exposed to pods in the same way, but there are new methods for implementing requests and limits on pods' consumption of ephemeral storage. It is important to understand that ephemeral storage is shared among all pods in the system, and that OKD does not provide any mechanism for guaranteeing any level of service beyond the requests and limits established by the administrator and users. For example, ephemeral storage does not provide any guarantees of throughput, I/O operations per second, or any other measure of storage performance. A node’s local storage can be broken into primary and secondary partitions. Primary partitions are the only ones you can use for ephemeral local storage. There are two supported primary partitions, root and runtime. Root Root partitions hold the kubelet’s root directory, /var/lib/kubelet/ by default, and /var/log/ directory. You can share this partition among pods, the operating system, and OKD system daemons. Pods can access this partition by using EmptyDir volumes, container logs, image layers, and container writable layers. OKD manages shared access and isolation of this partition. Runtime Runtime partitions are optional partitions you can use for overlay file systems. OKD attempts to identify and provide shared access along with isolation to this partition. This partition contains container image layers and writable layers. If the runtime partition exists, the root partition does not hold any image layer or writable layers.
https://docs.okd.io/3.11/scaling_performance/optimizing_ephemeral_storage.html
2020-07-02T12:41:23
CC-MAIN-2020-29
1593655878753.12
[]
docs.okd.io
We need help filling out this section! Feel free to follow the edit this page link and contribute. Brief tutorial on showing quantity, full explanation of customizing add to cart next page. Previously we learned how to create a product catalog. Now, lets customize the Add to cart form a little bit. We will learn how to add quantity field in that form. Go to /admin/commerce/config/order-item-types/default/edit/form-display/add_to_cart. Drag the Quantity field, and Save the form. Go ahead and refresh the /products page. And voila!! You can now choose quantity while adding products to cart. Found errors? Think you can improve this documentation? edit this page
https://docs.drupalcommerce.org/commerce2/developer-guide/cart/quantity-on-cart-form
2020-07-02T11:43:26
CC-MAIN-2020-29
1593655878753.12
[]
docs.drupalcommerce.org
This article explains how to install Fluentd using Chef. Please follow the Preinstallation Guide to configure your OS properly. This will prevent many unnecessary problems. The chef recipe to install td-agent can be found here. Please import the recipe, add it to run_list, and upload it to the Chef Server. Please run chef-client to install td-agent across your machines..
https://docs.fluentd.org/v/0.12/articles/install-by-chef
2020-07-02T13:29:26
CC-MAIN-2020-29
1593655878753.12
[]
docs.fluentd.org
Release 3.3.0 The release spans the period between 2018-11-01 to 2018-11-09 The following tickets are included in this release. - Make service instance status more transparent - Add quota charts to project dashboard Ticket DetailsTicket Details Make service instance status more transparentMake service instance status more transparent Audience: All users Component: meshfed DescriptionDescription The status of service instances is now correctly called "Last Operation". It shows only the status of the last operation, not the current status of the service. Therefore the operation type (create, update, delete) is now displayed in the status badge as well. Details about the last operation are available as tooltips on the status badge. Add quota charts to project dashboardAdd quota charts to project dashboard Audience: All users Component: release DescriptionDescription Users have now an overview about all platform instance quotas.
https://docs.meshcloud.io/blog/2018/11/09/Release-0.html
2020-07-02T12:17:43
CC-MAIN-2020-29
1593655878753.12
[]
docs.meshcloud.io
And Today I Shall Mostly Be Reading… … on my phone. Dolt that I am (wondering how I could get this document from my PC to my phone when all I had to do was click on the download link in IE on the phone) it took me a minute or two to figure this out. In fact, the experience is really nice – click the DL link and you’re informed you need the PDF reader app and offered the option to get it from Marketplace. I was one step ahead here and had already initiated the install of the Adobe Reader app. Click and up it comes. And it’s easily accessible from the start screen of the reader app. Nice for reading on the plane today on my way to Aberdeen later today.
https://docs.microsoft.com/en-us/archive/blogs/mikeormond/and-today-i-shall-mostly-be-reading
2020-07-02T13:39:25
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
Distribute a quiz When you finish configuring the answers for the quiz questions, you are ready to distribute the quiz. About this task You can send the quiz to all the category users configured for the quiz or to a single category user. Procedure Navigate to Quiz Management > Quizzes. Open the quiz record, and click Publish. The quiz is placed in the Published state, and it is sent to all its category users. You can edit and resend published quizzes. See Modifying Published Quizzes to learn how various modifications affect the quiz contents. To resend a quiz, click the appropriate button: Assign Quiz: Send the quiz to one category user. Send Quizzes: Send the quiz to all of its category users. Note: These buttons are hidden if there are no category users defined for the the quiz. Related tasksCreate a quizModify a published quiz
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/assessments/task/t_DistributeAQuiz.html
2020-07-02T13:45:20
CC-MAIN-2020-29
1593655878753.12
[]
docs.servicenow.com
Blocks¶ The Block class¶ To write a new Block subclass, we need to write the following: - the __init__that validates the arguments when constructing the block - the get_sources_and_requeststhat processes the request - the processthat processes the data - a number of attributes such as extentand period About the 2-step data processing¶ The get_sources_and_requests method of any block is called recursively from get_compute_graph and feeds the request from the block to its sources. It does so by returning a list of (source, request) tuples. During the data evaluation each of these 2-tuples will be converted to a single data object which is supplied to the process function. First, an example in words. We construct a View add = RasterFileSource('path/to/geotiff') + 2.4 and ask it the following: - give me a 256x256 raster at location (138000, 480000) We do that by calling get_data, which calls get_compute_graph, which calls get_sources_and_requests on each block instance recursively. First add.get_sources_and_requests would respond with the following: - I will need a 256x256 raster at location (138000, 480000) from RasterFileSource('/path/to/geotiff') - I will need 2.4 Then, on recursion, the RasterFileSource.get_sources_and_requests would respond: - I will give you the 256x256 raster at location (138000, 480000) These small subtasks get summarized in a compute graph, which is returned by get_compute_graph. Then get_data feeds that compute graph to dask. Dask will evaluate this graph by calling the process methods on each block: - A raster is loaded using RasterFileSource.process - This, together with the number 2.4, is given to Add.process - The resulting raster is presented to the user. Implementation example¶ As an example, we use a simplified Dilate block, which adds a buffer of 1 pixel around all pixels of given value: class Dilate(RasterBlock): def __init__(self, source, value): assert isinstance(source, RasterBlock): value = float(value) super().__init__(source, value) @property def source(self): return self.args[0] @property def value(self): return self.args[1] def get_sources_and_requests(self, **request): new_request = expand_request_pixels(request, radius=1) return [(self.store, new_request), (self.value, None)] @staticmethod def process(data, values=None): # handle empty data cases if data is None or values is None or 'values' not in data: return data # perform the dilation original = data['values'] dilated = original.copy() dilated[ndimage.binary_dilation(original == value)] = value dilated = dilated[:, 1:-1, 1:-1] return {'values': dilated, 'no_data_value': data['no_data_value']} @property def extent(self): return self.source.extent @property def period(self): return self.source.period In this example, we see all the essentials of a Block implementation. - The __init__checks the types of the provided arguments and calls the super().__init__that further initializes the block. - The get_sources_and_requestsexpands the request with 1 pixel, so that dilation will have no edge effects. It returns two (source, request) tuples. - The process(static)method takes the amount arguments equal to the length of the list that get_sources_and_requestsproduces. It does the actual work and returns a data response. - Some attributes like extentand periodneed manual specification, as they might change through the block. - The class derives from RasterBlock, which sets the type of block, and through that its request/response schema and its required attributes. Block types specification¶ A block type sets three things: - the response schema: e.g. “RasterBlock.process returns a dictionary with a numpy array and a no data value” - the request schema: e.g. “RasterBlock.get_sources_and_requests expects a dictionary with the fields ‘mode’, ‘bbox’, ‘projection’, ‘height’, ‘width’” - the attributes to be implemented on each block This is not enforced at the code level, it is up to the developer to stick to this specification. The specification is written down in the type baseclass RasterBlock() or GeometryBlock(). API specification¶ Module containing the core graphs. - class dask_geomodeling.core.graphs. Block(*args)¶ A class that generates dask-like compute graphs for given requests. Arguments (args) are always stored in self.args. If a request is passed into the Block using the get_dataor (the lazy version) get_compute_graphmethod, the Block figures out what args are actually necessary to evaluate the request, and what requests need to be sent to those args. This happens in the method get_sources_and_requests. After the requests have been evaluated, the data comes back and is passed into the processmethod. - classmethod deserialize(val, validate=False)¶ Deserialize this block from a dict containing version, graph and name get_compute_graph(cached_compute_graph=None, **request)¶ Lazy version of get_data, returns a compute graph dict, that can be evaluated with compute (or dask’s get function). The dictionary has keys in the form name_tokenand values in the form tuple(process, *args), where argsare the precise arguments that need to be passed to process, with the exception that args may reference to other keys in the dictionary. get_graph(serialize=False)¶ Generate a graph that defines this Block and its dependencies in a dictionary. The dictionary has keys in the form name_tokenand values in the form tuple(Block class, *args), where argsare the precise arguments that were used to construct the Block, with the exception that args may also reference other keys in the dictionary. If serialize == True, the Block classes will be replaced by their corresponding import paths. get_sources_and_requests(**request)¶ Adapt the request and/or select the sources to be computed. The request is allowed to differ per source. This function should return an iterable of (source, request). For sources that are no Block instance, the request is ignored. Exceptions raised here will be raised before actual computation starts. (at .get_compute_graph(request)). - static process(data)¶ Overridden to modify data from sources in unlimited ways. Default implementation passes single-source unaltered data. dask_geomodeling.core.graphs. construct(graph, name, validate=True)¶ Construct a Block with dependent Blocks from a graph and endpoint name.
https://dask-geomodeling.readthedocs.io/en/latest/blocks.html
2020-07-02T12:33:26
CC-MAIN-2020-29
1593655878753.12
[]
dask-geomodeling.readthedocs.io
The persistent menu allows you to set navigation to help Facebook page subscribers to discover and more easily access your functionality throughout their conversation and is always available. You can easily configure this menu via Botgento panel. To do so, navigate to Customization > Persistent menu page which looks like below screen. Here, you can create single or nested level persistent menu from left section of this page. Note that maximum 3 items to top navigation, maximum 5 items for nested menu and up-to 3 hierarchy is allowed. Refer Facebook Persistent menu reference for more details. There are 4 different item type available which you can configure for persistent menu. 1. Sub-menu If you want to set current item as parent node, select its type to Submenu. 2. Link Allows you to set external URL link for menu item. 3. Blocks Assign blocks as menu item which will rendering selected blocks while clicking on it. You can choose multiple block at once. 4. Actions Allows you to set predefined methods for particular item. You can choose any of below method: SHOPMORE which shows Magento catalog details MY ORDER shows recent 5 orders MY WISHLIST will list subscriber wishlist SUBSCRIBE allows to opt-in for messenger updates UNSUBSCRIBE give ability to opt-out for updates Any changes made in persistent menu is reflected after few minutes in messenger and also require reloading messenger window. If all goes well, you can see persistent menu in Facebook messenger similar to below screen. Head over to Facebook Best Practices before customize persistent menu. You can submit your concerns to our help center.
https://docs.botgento.com/customization/persistent-menu
2020-07-02T11:35:58
CC-MAIN-2020-29
1593655878753.12
[]
docs.botgento.com
WGA Internet Explorer 7 issue Last week I had some issues convincing my machine that it could indeed run the WGA tool I came across a bunch of genuine advantage tips and tricks to help along the way: Please download and run: - this is the latest WGA Notifications file. Afterward, login as administrator and head to and click Validate Windows. I am still working on the following issue however: --------------------------- Windows Genuine Advantage ---------------------------72f19] --------------------------- OK --------------------------- Update: Alin Constantin writes in this forum post: "As for the 0x80072F19, it means ERROR_INTERNET_SEC_CERT_REV_FAILED."
https://docs.microsoft.com/en-us/archive/blogs/davidmcg/wga-internet-explorer-7-issue
2020-07-02T13:41:14
CC-MAIN-2020-29
1593655878753.12
[]
docs.microsoft.com
Welcome to Upodi Docs. You'll find comprehensive help, guides and developer information to help you get started with the basics and move to the advanced flows. Let's jump right in! I am a User I am a Developer Fasten your onboarding with guides and articles. Learn which subscription best-practives might help you. API reference, code examples and dot-code. We update this section regularly to help our community. Use the Intercom icon, or contact helpdesk at [email protected]. Btw. did you find the owls? We have hidden 5 owls in our documentation.
https://docs.upodi.com/?page=1
2020-07-02T12:14:18
CC-MAIN-2020-29
1593655878753.12
[]
docs.upodi.com
MongoDB is a NoSQL type, open-source document database. This sample demonstrates the usage of MongoDB as a data source in WSO2 DSS. About the sample This sample data service contains the operations listed below. See Data Services and Resources for a definition of data services and operations. - mongo_insert: This operation adds a document according to the provided id, name. - mongo_insert_doc: Using this operation you can inserts a document into the 'things' collection: - mongo_find: This operation returns all documents from the collection. - mongo_count: This operation counts and returns the number of all documents in the 'things' collection - mongo_update: This operation sets the name as 'Zack' and the id as the provided value for the document where name is 'Bob' - mongo_remove: This operation removes all the documents from the collection 'things' where id is equal to the given value - mongo_drop: This operation will drop the collection 'things' from the database Prerequisites A MongoDB server v2.4.x or v2.2.x should be already running in the default port. Create a collection as below in the command shell. mongo use mydb db.createCollection("things") db.things.insert( { id: 1, name: "Document1" } ) Building the sample The sample data service named MongoDBSample should be deployed using the instructions in the Samples Setup section. Running the sample The sample can be run using any SOAP client such as the Tryit tool that comes bundled with WSO2 DSS. Follow the steps below to demonstrate this functionality using the TryIt tool: - Log in to the management console of your server and click List under Services in the navigator. The MongoDBSamplewill be listed here. - Click Try this service to open the TryIt tool. - Select the relevant operation and click Send to execute the commands as shown below. - Invoking the operation 'mongo_insert' to insert a document - Invoking the operation 'mongo_find' to retrieve the data in the collection
https://docs.wso2.com/display/DSS322/MongoDB+Sample
2020-07-02T13:25:04
CC-MAIN-2020-29
1593655878753.12
[]
docs.wso2.com
This chapter doesn't intend to describe the idea that stands behind CQS/CQRS. If you would like to learn more about it, there are many good resources that explain the topic in a better way. $ yarn add @marblejs/core @marblejs/messaging rxjs fp-ts The design has as its main concern, the separation of read and write operations The pattern divides methods into three categories: commands, queries and events. Marble.js implements a dedicated “local” transport layer for EventBus messaging, which can be easily adopted to CQS/CQRS pattern. Compared to other popular implementations, the module implements only one, common bus for transporting events of any type. Dispatching a command is akin to invoking a method: we want to do something specific. This means we can only have one place, one handler, for every command. This also means that we may expect a reply: either a value or an error. Firing command is the only way to change the state of our system - they are responsible for introducing all changes to the system. {type: 'CREATE_USER',payload: {firstName: 'John',lastName: 'Doe',}} Queries never modify the database - they only perform read operations so they don't affect the state of the system. A query returns a DTO that does not encapsulate any domain knowledge. {type: 'GET_USER_BY_ID',payload: {id: '#ABC123',}} When dispatching an event, we notify the application about the change. We do not get a reply and there might be more than one effect handler registered. {type: 'USER_CREATED',payload: {id: '#ABC123',}} In order to initialize event bus you have bind the dependency to the server. Since the context resolving can occur in an asynchronous way the dependency has to be bound eagerly (on app startup). The factory inherits all bounded dependencies from the server, so there is not need to register the same dependencies one more time. import { bindEagerlyTo } from '@marblejs/core';import { EventBusToken, eventBus } from '@marblejs/messaging';const listener = messagingListener({middlewares: [ ... ],effects: [ ... ],});// ...bindEagerlyTo(EventBusToken)(eventBus({ listener })), EventBus client is a specialized form of messaging client that operates on local transport layer. It can be injected into any effect via EventBusClientToken. Due to its async nature it has to be bound eagerly on app startup. import { bindEagerlyTo } from '@marblejs/core';import { EventBusClientToken, eventBusClient } from '@marblejs/messaging';// ...bindEagerlyTo(EventBusClientToken)(eventBusClient), Similar to every messaging client, the reader exposes two main methods: Let's build a simple event-based app that demonstrates how to dispatch commands from an endpoint and process them in a command handler. import { act, useContext, matchEvent } from '@marblejs/core';import { reply, MsgEffect } from '@marblejs/messaging';import { mergeMap } from 'rxjs/operators';import { pipe } 'fp-ts/lib/pipeable';import { createUser } from './user.model';import { UserCommand } from './user.commands';import { UserRespositoryToken } from './tokens';export const createUser$: MsgEffect = (event$, ctx) => {const userRepository = useContext(UserRespositoryToken)(ctx.ask);return event$.pipe(matchEvent(UserCommand.createUser),act(event => pipe(event.payload,createUser,userRepository.persist,mergeMap(user => [UserEvent.userCreated(user.id),reply(event)({ type: event.type }),]),)),);}; The logic of the command handler is quite simple: match all incoming events (commands) of specific type ( CREATE_USER) create domain object and persist via injected repository notify all interested parties that user was created return a confirmation back to client import { r, HttpStatus, useContext, use } from '@marblejs/core';import { map, mapTo, mergeMap } from 'rxjs/operators';import { EventBusClientToken } from '@marblejs/messaging';import { requestValidator$, t } from '@marblejs/middleware-io';import { UserCommand } from './user.commands';import { pipe } from 'fp-ts/lib/pipeable';const validator$ = requestValidator$({body: t.type({firstName: t.string,lastName: t.string,}),});export const postUser$ = r.pipe(r.matchPath('/user'),r.matchType('POST'),r.useEffect((req$, ctx) => {const eventBusClient = useContext(EventBusClientToken)(ctx.ask);return req$.pipe(use(validator$),mergeMap(req => {const { firstName, lastName } = req.body;return pipe(UserCommand.createUser(firstName, lastName),eventBusClient.send,);}),mapTo({ status: HttpStatus.CREATED }),);})); The implementation of postUser$ effect is also very simple. First we have to inject the EventBus client from the context and map the incoming request to CREATE_USER command. Since we want to notify the API consumer about success or failure of operation, we have to wait for the response. In order to do that we have to dispatch an event using send method. import { httpListener, createServer, bindEagerlyTo } from '@marblejs/core';import { messagingListener, EventBusToken, EventBusClientToken, eventBusClient, eventBus } from '@marblejs/messaging';import { bodyParser$ } from '@marblejs/middleware-body';import { logger$ } from '@marblejs/middleware-logger';import { postUser$ } from './postUser.effect';import { createUser$ } from './createUser.effect';const eventBusListener = messagingListener({effects: [createUser$,],});const listener = httpListener({middlewares: [logger$(),bodyParser$(),],effects: [postUser$,],});export const server = createServer({listener,dependencies: [bindEagerlyTo(EventBusClientToken)(eventBusClient),bindEagerlyTo(EventBusToken)(eventBus({ listener: eventBusListener })),],});const main: IO<void> = async () =>await (await server)();main();
https://docs.marblejs.com/messaging/cqrs
2020-07-02T12:46:01
CC-MAIN-2020-29
1593655878753.12
[]
docs.marblejs.com
Searching in Libraries -.
https://docs.toonboom.com/help/harmony-17/essentials/library/search-library.html
2020-07-02T11:52:48
CC-MAIN-2020-29
1593655878753.12
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Library/HAR11/HAR11_005_SearchLibrary_001.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Library/HAR11/HAR11_005_SearchLibrary_002.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Library/HAR11/HAR11_005_SearchLibrary_003.png', None], dtype=object) ]
docs.toonboom.com
Playing Back the Animatic You can preview your project as an animatic in Storyboard Pro at any time during its development process. Using the Playback toolbar, you can preview the visual content, including transformations and transitions, and have it synchronized with sounds. - In the Playback toolbar, click the Sound button. If you want to see how the shots will look with dynamic camera movement, click the Camera Preview button. You will need this option on to preview Camera moves and transitions. When you drag the timeline playhead while Camera Preview is enabled, it will adjust the Stage view to match the point of view of the camera. - In the Timeline or Thumbnails view, select the panel where you want the playback to begin. - In the Playback toolbar, click the Play Selection or Play buttons or press Shift + Enter. - To play your project in a continuous loop, click the Loop button. - You may also scroll through the Timeline view by dragging the red playhead. - Select Play > Previous Frame or Next Frame to skip and play back one frame at a time. Or press comma (,) and period (.).
https://docs.toonboom.com/help/storyboard-pro-7/storyboard/timing/play-back-animatic.html
2020-07-02T12:38:08
CC-MAIN-2020-29
1593655878753.12
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
The Datadog Lambda Layer is responsible for: dd-tracelibrary, letting customers trace their Lambda functions with Datadog’s tracing libraries, currently available for Node.js, Python, and Ruby with more runtimes coming soon.. Installation steps:: # Whether to add the Lambda Layers, or expect the user to bring their own. Defaults to true. addLayers: true # The log level, set to DEBUG for extended logging. Defaults to info. logLevel: "info" # Send custom metrics via logs with the help of Datadog Forwarder Lambda function (recommended). Defaults to false. flushMetricsToLogs: false # Which Datadog Site to send data to, only needed when flushMetricsToLogs is false. Defaults to datadoghq.com. site: datadoghq.com # datadoghq.eu for Datadog EU # Datadog API Key, only needed when flushMetricsToLogs is false apiKey: "" # Datadog API Key encrypted using KMS, only needed when flushMetricsToLogs is false apiKMSKey: "" # Enable tracing on Lambda functions and API Gateway integrations. Defaults to true enableXrayTracing: true # Enable tracing on Lambda function using dd-trace, datadog's APM library. Requires datadog log forwarder to be set up. Defaults to true. enableDDTracing: true # When set, the plugin will try to subscribe the lambda's cloudwatch log groups to the forwarder with the given arn. forwarder: arn:aws:lambda:us-east-1:000000000000:function:datadog-forwarder Setting flushMetricsToLogs: true is recommended for submitting custom metrics via CloudWatch logs with the help of Datadog Forwarder. below): Globals: Function: Tracing: Active Environment: Variables: DD_API_KEY: YOUR_DATADOG_API_KEY Api: TracingEnabled: true You can also include the Datadog Lambda package directly in your project either from source or with the standard package manager for your runtime. Note: AWS SAM supports downloading Lambda Layers for local development. You can configure the Datadog Lambda Layer by adding environment variables to your Lambda functions: Additional helpful documentation, links, and articles:
https://docs.datadoghq.com/infrastructure/serverless/datadog_lambda_layer/
2020-07-02T12:57:02
CC-MAIN-2020-29
1593655878753.12
[]
docs.datadoghq.com
What is a TEX File?. TeX input files are based on ASCII-code, thereby allowing manuscript sharing among writers, publishing managers and critics. A wide variety of computing environments, almost every modern platform and lot of older platforms support TeX. Moreover, TeX is a free software, available to a wide range of consumers. Many UNIX installations use both UNIX troff and TeX as their formatting system for different purposes. Other typesetting tasks are performed tremendously in the form of LaTeX, ConTeXt, and other macro packages. Brief History TeX was designed and written by Donald Knuth in 1978. Guy Steele from Massachusetts Institute of Technology revised input/output of TeX to make it run under the Incompatible operating system like Timesharing System (ITS). The first version of TeX was developed under Stanford’s WAITS operating system in the programming language (SAIL) and tested to run on a PDP-10. Knuth introduced the idea of literate programming for advance versions. Literate programming is a way of generating compilable source code and typeset (in TeX) for cross-linked documentation using the original file. The language used to develop these advanced versions of TeX is called WEB, a mixture of DEC PDP-10 Pascal programs to ensure portability. A revised new version of TeX published in 1982 and was called TeX82. The major change is the replacement of the original hyphenation algorithm with the newly written algorithm by Frank Liang. To ensure portability across different platforms, Instead of using floating-point, TeX82 uses fixed-point arithmetic along with a real, turing-complete programming language. In 1989, a new versions of TeX and Metafont was released. So the version 3.0 of TeX facilitates 8-bit inputs, allowing 256 different characters in the text. After version 3, updates are denoted by adding an extra digit at the end of the decimal e.g. current version of TeX is indicated as 3.14159265. This version was last updated 12-1-2014. TeX Input An Input file to TEX can be prepared with a text editor using ordinary text. Unlike a typical Word processor, this input file disallows any invisible control characters. One file can be embedded into another file, containing macro definitions and auxiliary definitions that enhances TeX’s capabilities. If a TeX installation comes with any macro files, the local information about TeX demonstrates about using macro files. The standard form of TeX, integrates a combination of macros and other definitions known as plain TEX. On the basis of precise knowledge of the sizes of all characters and symbols, it calculates the optimum organization of letters per line and lines per page. At the time of document processing, a .dvi file is produced, where “dvi” stands for “device independent”. Device driver programs are required for printing or previewing the document with a dvi extension. Nowadays, dvi generation is bypassed by a commonly used pdf- TeX. No prior knowledge of fonts is available within TeX installation, so external font files, which are part of local TeX environment are used to obtain information for document. Typesetting System About 300 primitives (commands) can be understand by the base TeX system. Primitives are low-level commands, therefore a common user rarely used them directly and most functionality is performed by format files. These format file are preloaded memory images of TeX which are followed by the loading of large macro collections. The original default format of the language i.e plain TeX adds about 600 commands. A backslash grouped with curly braces denotes the starting of TeX commands. Since TeX is a macro and token based language, almost all of TeX’s syntactic characteristic can be changed at run time, including user-defined ones except unexpandable tokens which are then executed. Expansion itself is practically trouble free. Some commands need to come after an arguments that help to explain the function of a command. For instance, the \vskip command directs TEX to skip down/up the page followed by an argument determining how much space to skip. Versions LaTeX is the most frequently used format which is originally developed by Leslie Lamport. LaTeX integrates different document styles for files, letters, books and slides and offers referencing and automatic numbering for different sections and mathematical expressions. AMS-TeX is another popular format, developed by the American Mathematical Society. AMS-TeX offers a lot more user-friendly commands, which can be redefined by journals to fit with their local style. LaTeX can take the benefits of AMS-TeX by using the AMS “packages” which is then termed as AMS-LaTeX. ConTeXt is another format written by Hans Hagen used mainly for desktop publishing. The TeX software offers several features that were unavailable, or of lower quality, in other typesetting systems at the time of its creation. Some of the innovative features of this language are based on interesting algorithms derived from the theses of Knuth’s students. While other typesetting programs are now incorporating useful features of TeX into their programs.
https://docs.fileformat.com/page-description-language/tex/
2020-07-02T12:49:40
CC-MAIN-2020-29
1593655878753.12
[]
docs.fileformat.com
pxf A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 6.x documentation. Manage the PXF configuration and the PXF service instance on the local Greenplum Database host. Synopsis pxf <command> [<option>] where <command> is: cluster help init. (Use the pxf cluster command
https://gpdb.docs.pivotal.io/6-4/pxf/ref/pxf.html
2020-07-02T13:41:28
CC-MAIN-2020-29
1593655878753.12
[]
gpdb.docs.pivotal.io
This article describes how to get the internal Fluentd metrics via REST API. Fluentd has a monitoring agent to retrieve internal metrics in JSON via HTTP. Please add the following lines to your configuration file. <source>@type monitor_agentbind 0.0.0.0port 24220</source> Next, please restart the agent and get the metrics via HTTP. $ curl{"plugins":[{"plugin_id":"object:3fec669d6ac4","type":"forward","output_plugin":false,"config":{"type":"forward"}},{"plugin_id":"object:3fec669dfa48","type":"monitor_agent","output_plugin":false,"config":{"type":"monitor_agent","port":"24220"}},{"plugin_id":"object:3fec66aead48","type":"forward","output_plugin":true,"buffer_queue_length":0,"buffer_total_queued_size":0,"retry_count":0,"config":{"type":"forward","host":"192.168.0.11"}}]} See in_monitor_agent article for more detail. Use flowcounter or flowcounter_simple plugin. Datadog is a cloud monitoring service, and its monitoring agent dd-agent has native integration with Fluentd. Please refer this documentation for more details. If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is a open source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.
https://docs.fluentd.org/deployment/monitoring-rest-api
2020-07-02T13:22:55
CC-MAIN-2020-29
1593655878753.12
[]
docs.fluentd.org
The following is a list of features and fixes that have been included in the latest FloWorks releases. FloWorks 21.0.0 - User manual re-organized and updated. - Dashboard property panels added for FloWorks charts. - Added option to make flow tank multi-product. - New version of the mass-flow conveyor. - 'Optimize network before solving' is out of beta and now the default Flow Control mode for new models. - Bug fix: exception when switching to named product table fixed. - Bug fix: Flow pipe passes name instead of product ID in first On Product Out Change after reset. - Bug fix: Removed redundant license check in model limit panel. FloWorks 20.2.1 (02 September 2020) - Bug fix: Statistics panel in Quick Properties was showing incorrectly for non-FloWorks objects. - Bug fix: Restored missing icon for FloWorks Custom Action activity. - Bug fix: Unlimited flow rate checkbox fixed for Flow Source properties. - Bug fix: Fix compiler error in FlowObject.(in/out)put.ports[i].ratio - All bug fixes included in version FloWorks 20.0.3 below. FloWorks 20.2.0 (20 August 2020) - FloWorks Process Flow templates added to Toolbox menu. - New chart templates for Dashboard Library and Pin buttons. - FloWorks Product Table is now globally accessible through the Toolbox. - FloWorks colors for new models are now determined by Color Palette. - Textures can be applied to products when using FloWorks Product Table. - All bug fixes included in FloWorks 20.0.2 below. FloWorks 20.1.3 (20 August 2020) - All bug fixes included in FloWorks 20.0.2 below. FloWorks 20.1.2 (15 June 2020) - Bug fix: Fixed exception in release of Flow To Item. - Bug fix: Removed internal output to console. - Bug fix: Removed redundant license check in old solver. FloWorks 20.1.1 (23 April 2020) - Bug fix: Update script for FloWorks Custom Action activities. - Bug fix / improvement: Library grid updated. FloWorks 20.1.0 (17 April 2020) - Release for FlexSim 2020 Update 1. FloWorks 20.0.4 - Bug fix: Flow pipe passes name instead of product ID in first On Product Out Change after reset. - Bug fix: Removed redundant license check in model limit panel. FloWorks 20.0.3 (02 September 2020) - Bug fix: Input / output triggers were sometimes scheduled at infinity. FloWorks 20.0.2 (20 August 2020) - Bug fix: Set content activity uses FlowTank in code header. Added "Max. content" option to picklist. - Bug fix: Process Flow activity did not correctly set relative output trigger amount. - Bug fix: Flow item tank reset content at end of warmup. - Bug fix: Adding Process Flow activity created module dependency. - Bug fix: Documentation for flow processor updated. - Bug fix: Update script for FloWorks Custom Action activities. - Bug fix / improvement: Library grid updated. - Bug fix: Fixed exception in release of Flow To Item. - Bug fix: Removed internal output to console. - Bug fix: Removed redundant license check in old solver. FloWorks 20.0.1 (04 February 2020) - Bug fix: Fixed incorrect berth configurations. - Bug fix: Removed popup during startup. - Bug fix: MODE_RESTART_FALLINGwas sometimes incorrectly #defined the same as MODE_RESTART_RISING. - Bug fix: Several bug fixes and improved stability for the new (beta) solver. - Documentation: updated Tank trigger levels description. FloWorks 20.0.0 (13 December 2019) - Feature: Added Process Flow activities for FloWorks. - Feature: Added Process Flow templates for Basic Berth and Tank Pool. - User request: Added option not to checkout FloWorks license. - Bug fix: Removed inconsistent behavior after (de)activating license. - Bug fix: Tank level trigger could cause event list to become unsorted. - Bug fix: Max. Object Depth setting in FloWorks charts was ignored. FloWorks 19.2.2 (13 December 2019) - Bug fix: FlexSim 19.2.4 removed node that was overridden in FloWorks - All changes included in version 19.0.8. FloWorks 19.2.1 (18 October 2019) - Bug fix: allow objects to be created when no runtime license present. - All changes included in version 19.0.7. FloWorks 19.2.0 (26 September 2019) - Flow pump / valve input and output can be set to balanced rate by % simultaneously. - Flow To Item can now buffer additional material before blocking. - New segmented flow pipe object. - Created new runtime license type. - All changes included in version 19.0.5. FloWorks 19.1.5 (18 October 2019) - All changes included in version 19.0.7. FloWorks 19.1.4 (09 October 2019) - Bug fix: Restored objects missing from library and icon grid. FloWorks 19.1.3 (26 September 2019) - All changes included in version 19.0.5. FloWorks 19.1.2 (19 July 2019) - Bug fix: dragging in a shape from a group will no longer create the first shape in the group. - All changes included in version 19.0.4. FloWorks 19.1.1 FloWorks 19.1.1 was an internal release which has not been released publicly. FloWorks 19.1.0 (01 April 2019) - Added support for unconstrained flows. - All changes included in version 19.0.2. - Option "Unchanged" removed from "Set flow rate" triggers -- this is now considered as "Unconstrained". Use the "Set maximum input/output rate" option instead of "Set maximum rates" if you only want to change either the input or the output rate. -.8 (13 December 2019) - Bug fix: Tank level trigger could cause event list to become unsorted. - Bug fix: Max. Object Depth setting in FloWorks charts was ignored. FloWorks 19.0.7 (18 October 2019) - Request: Added Waiting for Transport state to Loading Point. - Bug fix: Flow Content Histogram was broken. - Bug fix: Fixed rounding errors when inflow very close to outflow. - Bug fix: Fixed rounding error in tank top/bottom duration calculation. - Bug fix: Avoid duplicate event when recalculation requested before Flow Control resets. FloWorks 19.0.6 (09 October 2019) - Bug fix: Restored objects missing from library and icon grid. FloWorks 19.0.5 (26 September 2019) -.4 (19 July 2019) - Various bug fixes to the Mass Flow Conveyor. - Bug fix: Utiliation returns (flow rate)/(max flow rate) at time zero. - Bug fix: Solved exception when dragging different object shape into the model. FloWorks 19.0.3 (30 April 2019) - Loading arm state will show Blocked and Starved instead of Idle while transporter is connected. - "Pass product downstream" option added to product change trigger of flow processors. - Bug fix: Transporters were sometimes positioned incorrectly when entering Loading Point. - Bug fix: Flow object states were not updated. - Bug fix: Fixed error when calling state() with no arguments. - Bug fix: On Entry and On Exit in trigger list of Flow To Item and Item To Flow fixed. - Bug fix: "Pass product downstream trigger" threw exception when connected to Flow To Item. FloWorks 19.0.2 (01 April 2019) - Feature: Flow tank can scale in both directions (elliptical) instead of only using x-size for diameter. - Bug fix: Various bug fixes to beta version of new optimizer/solver. - Bug fix: Removed "content is larger than max. content" message during reset. - Bug fix: Fixed exception in "Pass product downstream" when pipe is not connected to anything. - Bug fix: Corrected normals on flow blender and flow splitter shapes. - Bug fix: FlowObject.input/output.ports[index] now accepts Variant(e.g. token.Port) as index and does bounds checking on index. - Bug fix: Spheres no longer drawn outside flow pipes shorter than 2 m. - Bug fix: Quick Properties only shows a single flow rate for flow pipes; output rate set to input rate on reset. FloWorks 19.0.1 (08 March 2019) FlowToItemand ItemToFlowadded to script so that rates and impact factors can be read and set. - Added more shapes for Flow Tank and Mixer. - Added "Change product by case" to trigger options. - Bug fix: Flow conveyors now have state profile consistent with that of Flow Tank. - Bug fix: Product color picker samples colors instead of objects again. - Bug fix: Fixed incorrect layers being drawn during filling of Flow Mixer when multiple steps require input from the same port. - Bug fix: Fixed incorrect states on Loading Point due to duplicate state_currentnode. - Bug fix: Product field or dropdown will now preserve selected value instead of resetting to current product when switching property tab pages. - Bug fix: Flow Task Executer connects itself to default network navigator on creation. - Bug fix: Flow statistics now behave correctly under model warmup. - User manual: Corrected description of FlowObject.stop()in documentation. - User manual: Documented manual loading feature when Loading Points have 0 loading time. - Loading point continues with next transporter after releasing completed item instead of waiting for it to exit. FloWorks 19.0.0 (27 February 2019) - Beta: Flow control can optimize network before solving. (Optimization is disabled by default, can be enabled for models with many (effective) single connections.) - Content-holding objects now have On Trigger Level event that allows e.g. Process Flow to wait for a specific level trigger. - Optimized event scheduling: obsolete events are removed from the event list instead of ignored. - Statistics are now kept in standard FlexSim tracked variables under the statsnode. If you use dashboards, you may need to rebuild some charts. You can mostly use the standard FlexSim chart templates, listening to the On Rate Change event or the On Content Change Update (not Change). - Tank trigger levels rewritten: - Trigger levels are now specified using absolute level instead of percentage. - Legacy limitations (max. 20 levels, no duplicate levels, 0% or 100%) have been removed. - Separate trigger condition has been added to avoid coding ( if(mode == falling) { ... }). - Modes risingand fallingare now called MODE_RISINGand MODE_FALLING. - Bug fix: Sometimes input and output triggers would not fire if trigger was reached precisely when flow was recalculated. - (Mass) flow conveyor now closes input/output when stopped, instead of input/output ports. - Bug fix: (Mass) flow conveyor only closed input when stopping; now closes both output and input. FloWorks 18.2.1 (6 September 2018) - Chart templates keep their shared property on save (after reset / build). - "Duplicate MTBF/MTTR" also duplicates member list. - Fixed crash when opening product table. - Added FloWorks Compartments options back to Source's On Creation picklist. - Added missing ItemToFlow and FlowToItem triggers. - Fixed exception when starting / ending impact factor event in upgraded model. FloWorks 18.2.0 (10 August 2018) For FlexSim version 18.0.x use FloWorks 18.0.2. For FlexSim version 18.1.x use FloWorks 18.1.1. - All features and bug fixes in FloWorks 18.1.1. In anticipation of some major changes to FloWorks 2019, this release of FloWorks does not include any new features or bug fixes. FloWorks 18.1.1 (18 April 2018) For FlexSim version 18.0.x use FloWorks 18.0.2. - Bug fix: "Change" field should be hidden on all Task Executers except Flow Task Execture. - Bug fix: Changing configuration of Flow Task Executer would cause strange behavior because the window was closed before all scripts had finished executing. - Bug fix: Some objects like Berth and Loading Arm no longer worked because code properties got untoggled. FloWorks 18.1.0 (9 April 2018) For FlexSim version 18.0.x use FloWorks 18.0.2. - Pumps, valves, blenders and splitters are now different shapes of the same "Flow Processor". - Truck loading points and Berths are now different shapes of the same "Loading Point". - Cylindrical tanks, rectangular tanks, tanks with polygon base area and flow piles are now different shapes of the same "Flow Tank". - Added button to flow tank to calculate physical size based on specified max. content. - Flow pipe has animation to help visualize if material is flowing and how fast. - Added post-step delay and trigger to multi-compartment loading controller steps. - Added Start Impact Event and End Impact Event triggers. - FloWorks chart templates have been added to the library. FloWorks 18.0.2 (9 April 2018) For FlexSim version 18.1.x use FloWorks 18.1.0. - Bug fix: Conveyor sometimes stops incorrectly. - Improvement: Objects try to avoid On State Change when state has not changed. FloWorks 18.0.1 (22 December 2017) For FlexSim version 18.1.x use FloWorks 18.1.0. - Bug fix: FloWorks broke double click in the Process Flow view to open the quick library. FloWorks 18.0.0 (15 December 2017) For FlexSim version 17.1.x use FloWorks 17.1.3. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.6. - Quick Properties panels added. - Added three FloWorks tutorials to user manual. - Added impact, stopand resumefunctions on FlowObject. - Added FLOW_STATE_*constants for use in stopand optionally impact. - Bug fix: In unlicensed version, license info is now correctly shown instead of “Unknown”. - All bug fixes in version 17.2.2 and 17.0.7. FloWorks 17.2.2 (15 December 2017) For FlexSim version 18.0.0 use FloWorks 18.0.0. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.7. - Bug fix: FlowTank's isEmptyand isFullreturn 0 and 1. - Bug fix: Statistics collectors pick up content changes by contentproperty setter. - Bug fix: Cannot change mixer recipe while running. - All bug fixes in version 17.0.7. FloWorks 17.2.1 (18 September 2017) For FlexSim version 17.1.x use FloWorks 17.1.3. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.6. - Bug fix: Adding charts using "Pin" buttons would show error message. - Bug fix: Incorrect reference in Wait For Event activity in Mixer recipe schedule template for Process Flow; disabled Repeat Schedule by default. - Bug fix: FloWorks objects remove pending events from the list when they are destroyed. FloWorks 17.2.0 (1 September 2017) For FlexSim version 17.1.x use FloWorks 17.1.3. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.6. - Added FlowObjectclass interface ("dot syntax") for majority of FloWorks objects. - Can now access products by name when using FloWorks product table (e.g. source.product = "Raw material"instead of source.product = 3). - Can define recipes for products in the products table and dynamically load / execute them on Flow Mixers. - Added Process Flow templates for mixers executing production schedule and flow tank with cleaning / certification. - Updated Pin to Dashboard buttons to use Statistics Collectors to collect data. - Revised most of the user manual (tutorials are missing, will be re-released in a future version). - Bug fix: Calling SelectFlowIpand SelectFlowOpwith multiple ports produced FlexSim error. - Bug fix: When using a product table, sometimes the Product dropdown would not show the correct product when opening Properties. - Bug fix: Flow Conveyor would not suspend correctly when output flow restricted. FloWorks 17.1.3 (1 September 2017) For FlexSim version 17.1.x use FloWorks 17.1.3. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.5. - Bug fix: Calling SelectFlowIpand SelectFlowOpwith multiple ports produced FlexSim error. - Bug fix: When using a product table, sometimes the Product dropdown would not show the correct product when opening Properties. - Bug fix: Flow Conveyor and Mass Flow Conveyor can cause FlexSim to crash, when used in combination with a product table. - Bug fix: Flow Conveyor would not suspend correctly when output flow restricted. FloWorks 17.1.0 (April 11, 2017) This version of FloWorks supports FlexSim version 17.1.0. - Added a multi-compartment loading controller to allow multiple tanks on the same Task Executer to be loaded in sequence and/or in parallel. - Added the mass flow conveyor, an accumulating version of the Flow Conveyor. - Instead of using numeric product IDs, you can now pre-define a product table in your model, with fixed product names and colors. See the Products page for more information. - As of now, FloWorks license versions will need to be upgraded with every release, similarly to your FlexSim license. (Existing users will automatically be requested to upgrade their license using the Request Upgrade button in the FlexSim License Activation window.) - All bug fixes included in version 17.0.2, see below. FloWorks 17.0.7 (15 December 2017) For FlexSim version 18.0.0 use FloWorks 18.0.0. For FlexSim version 17.2.x use FloWorks 17.2.2. - Bug fix: FloWorks objects remove pending events from the list when they are destroyed. - Bug fix: Fixed an issue in the LP solver. - Bug fix: Minor fix to internal treenode naming on reset. - Bug fix: Avoid a rounding issue in utilization calculation. - Bug fix: Mixer correctly resets visuals to empty in manual mode. - Bug fix: Can start mixer recipe from Reset or On Empty trigger. - Bug fix: Fixed reset error when opening model without flow trucks in a FlexSim installation without FloWorks. FloWorks 17.0.6 (1 September 2017) For FlexSim version 17.2.x use FloWorks 17.2.0. For FlexSim version 17.1.x (LTS) use FloWorks 17.1.3. - Bug fix: Calling SelectFlowIpand SelectFlowOpwith multiple ports produced FlexSim error. FloWorks 17.1.2 and 17.0.5 (June 28, 2017) For FlexSim version 17.1.x use FloWorks 17.1.2. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.5. - Bug fix: Fixed errors when loading FloWorks with other modules dependent on Process Flow, such as the Emulation module. FloWorks 17.1.1 and 17.0.4 (May 25, 2017) For FlexSim version 17.1.x use FloWorks 17.1.1. For FlexSim version 17.0.x (LTS) use FloWorks 17.0.4. - Bug fix: Fixed exception when changing to first product of product table using scripting - Bug fix: Flow To Item releases item when pulling object downstream is unblocked - Bug fix: Flow To Item now uses item.Typelabels instead of deprecated setitemtype. - Bug fix: Flow Conveyor with multiple inputs sometimes created too many update events. - Bug fix: Flow Conveyor now allows input port at end of conveyor. - Usability improvement: Pick list options updated to dot syntax, e.g. centerobject(current, 1)is now current.centerObjects[1]. - Usability improvement: Changed object list dropdown to be more descriptive about input, output and center connections. - Usability improvement: Changed input/output amount trigger template to increase current input/output instead of previous trigger amount - Usability improvement: Added FloWorks options to User Event code dropdown. FloWorks 17.0.2 (April 11, 2017) This version of FloWorks supports all LTS releases of FlexSim 17 (FlexSim versions 17.0.x). - Bug fix: Code headers correctly use Object instead of treenode for current and item so pick list items like "Object connected to center port" work again. - Bug fix: Utilization no longer reported as -100% for object with maximum flow set to 0 on reset. - Bug fix: Flow control no longer breaks down indefinitely once run with one connected object. - Bug fix: Fixed exception when copy/pasting object with Flow Arrows enabled. - Bug fix: Added missing icons for options in FloWorks submenu of Toolbox. - Bug fix: Flow Conveyor now correctly detects changes in ratio of incoming components where total flow stays the same. - Bug fix: ChangeTeEdgeSpeed command no longer throws exception when used on Task Executer not attached to Travel Network. - Bug fix: Berth and loading point clear their contents on reset, like all Fixed Resources do. FloWorks 17.0.1 (January 23, 2017) - Bug fix: Missing names and items in Flow Item Bin fixed. - Bug fix: Properties window for Flow Tank flow item now opens correctly. - Bug fix: Resolved some minor issues with "Add to dashboard" pins. FloWorks 17.0.0 (December 22, 2016) - FloWorks updated for FlexSim 2017 - Flow Mixer now has a manual mode, that will put it in Idle mode after each batch until you manually call StartMixerRecipe. - Allow Input Rate and Output Rate to be plotted versus time. - New Workability and Level Triggered Event objects added with focus in particular on modelling of ports and offshore processes. - Unfilled part of Flow Tank is no longer necessarily gray but takes the object color. - When Impact Events are active on a FloWorks object, they will show a box around the object similar to the "stopobject" behavior in standard FlexSim. The color of the box varies from red (0) through yellow to green (1) to black (infinity) depending on the impact factor of the event. - Dropdown lists for port actions improved. For example, you can now open or close a single port (OpenFlowIp c.s.), select an individual port to open, close input and open output, etc. - Increased limits on number of objects and events when running unlicensed version in FlexSim Educational version. - Bug fix: Can now add Object Groups to FloWorks charts. - Bug fix: When changing a Flow Task Executer from vessel to truck, the TE's flow tank is correctly scaled and positioned on reset. - Bug fix: The State display in the Quick Properties did not correctly show the state of the Loading Arm. - Bug fix: Arrow heads can now be dragged by holding 'X' key instead of Alt. - User commands consolidated and documentation updated: ConnectFlowObjectToOtherObjectcommand rewritten and documented. - All documentation in modules\FloWorks\help\Commands reformatted and automatically generated based on actual user commands. These help pages often provide more explanation on the parameters and their optional values, the return types, and more extensive examples than the FlexSim Command Helper pages. - Integration with the Process Flow module: - FloWorks options added to "Listener Initialized" trigger of "Wait for event" action. - Dropdown lists for port actions improved. For example, you can now open or close a single port (OpenFlowIp c.s.), select an individual port to open, close input and open output, etc. - The Item To Flow's OnItemEmptied trigger is now fired before the actual item is destroyed, so that you are still able to reference it in the trigger code. FloWorks 16.2.0 (August 25, 2016) - FloWorks updated for FlexSim 16.2.0. - Flow Utilization added (GetFlowUtilization command, charts) - OnInput and OnOutput trigger added to Pipe Triggers. - Pipe Out product event can be used in Process Flow. - Bug fix: Arrows don't work on Flow Mixer. - Bug fix: FloWorks "pin to chart" broke that functionality non-FloWorks objects - Bug fix: Model exhibited undefined behavior when balanced flow is zero for all ports. - Bug fix: Pipe's On Product In trigger fired incorrectly on Outflow Product change FloWorks 16.1.1 (July 14, 2016) - Bug fix: Sometimes flow rates would be set to zero when clicking Apply - Bug fix: Sometimes "Outflow rule" dropdown would become empty for pumps. - Bug fix: SetFlowMaxContent would sometimes misbehave in OnReset triggers. FloWorks 16.1.0 (June 23, 2016) - FloWorks now supports Process Flow! FloWorks triggers can be selected in Wait For Event and Event Triggered Source activities. New "FloWorks action" activity was added to the library. - Fixed an issue with setting maximum flow rates through interface and SetFlowRatesMaximum - Flow Content-Time diagram now supports more statistics than just content. Actual flow rates are also shown in Quick Properties. - FloWorks options in Trigger Fields are now grouped into submenus - Berths now position first ship at the front of the berth, so vessels dock front-to-back instead of back-to-front. - Fixed Shared Resource capacity field disappearing on click. - Visualization of cylindrical mixer is fixed. - Allow Item To Flow to produce 0 flow for a flow item. "Different time for Nth item" option in triggers was broken and removed. - Fixed a bug where Loading Arm would sometimes not detect end of loading (target tank full) event correctly. FloWorks 16.0.1 (April 25, 2016) - Allow changing of histogram title and number of bins during run, added axis titles - Fixed issues with "Flow Control" dropdown of all objects, including new "Not connected" option/ - Fixed issues with Shared Resource member list. - Changes to user interface: made tooltips, code editors and GUI layout more consistent. - Fixed bug with Input or Output Amount Trigger sometimes not firing. - Allow center port connections between all objects (e.g. loading arm and loading point) in both directions. - SetNewFluidRates is now deprecated and replaced by SetFlowRatesMaximum. FloWorks 16.0.0 (April 13, 2016) - Fixed some issues with statistics and quick properties panels - Added Flow Histogram chart - Fixed bug with timing of transporters travelling to berths
https://docs.flexsim.com/en/22.0/modules/FloWorks/manual/WhatsNew.html
2021-11-27T09:02:15
CC-MAIN-2021-49
1637964358153.33
[]
docs.flexsim.com
A positional. Add a Pattern..
https://docs.logicaldoc.com/en/document-metadata/barcodes/positional-barcode-templates
2021-11-27T08:56:10
CC-MAIN-2021-49
1637964358153.33
[array(['/images/stories/en/barcode/barcodes.png', None], dtype=object) array(['/images/stories/en/barcode/barcode-edit.png', None], dtype=object)]
docs.logicaldoc.com
Eventmie Pro comes with an integrated ticket scanner. Organizers & Admin can scan the event attendees tickets at event entrances directly from the website, using any mobile device or laptop with a web-camera. The ticket scanner scans QrCode on the ticket, verify if the ticket is valid, and provides an option to Check-in the attendees. Once a ticket is scanned, it can never be scanned again after Check-in. {primary} The Ticket PDF design & Ticket scanner performance has been improved. It scans tickets blazing fast. 😎 Ticket scanner requires the below things to work- {success} QrCode scanner automatically prompt to Allow Camera. After allowing the camera, it never prompts again and works seamlessly. {primary} If the browser does not prompt to Allow Camera (this happens rarely), you need to go to browser settings and manually allow the camera. Before proceeding to the Ticket scanner, let's see the Ticket PDF. Eventmie Pro generates tickets in PDF format with a unique QrCode in each. Zoom in. Admin| Organizer| Customer, all of them can download tickets from their Bookingspage. The scanning & Check-in process very smooth. 🍺 {primary} We know you know this. Please don't get offended. 😋 To scan a ticket- Scan Ticketon the header. {success} Works perfectly on iPhone. {success} Works perfectly in Android. {success} Works perfectly on Desktops. {primary} You're a Master now. ✌️ 🤝 {success} You can start using Eventmie Pro and we wish you great success. 👍
https://eventmie-pro-docs.classiebit.com/docs/1.5/bookings/ticket-scanner
2021-11-27T08:29:50
CC-MAIN-2021-49
1637964358153.33
[]
eventmie-pro-docs.classiebit.com
Source code: :source:`Lib/cgi.py` Support module for Common Gateway Interface (CGI) scripts. This module defines a number of utilities for use by CGI scripts written in Python. A(" CGI script output") print(" This is my first CGI script") print("Hello, world!"): print(" Error") print("Please fill in the name and addr fields.") return print(" name:", form["name"].value) print("() method, bytes. This may not be what you want. You can test for an uploaded file by testing either the filename attribute or the file attribute. You can then read the data()) These are useful if you want more control, or if you want to employ some of the algorithms implemented in this module in other circumstances. sys.stdin). The keep_blank_values, strict_parsing and separator parameters are passed to urllib.parse.parse_qs() unchanged. Parse input of type multipart/form-data (for file uploads). Arguments are fp for the input file, pdict for a dictionary containing other parameters in the Content-Type header, and encoding, the request encoding. Returns a dictionary just like urllib.parse.parse_qs(): keys are the field names, each value is a list of values for that field. For non-file fields, the value is a list of strings. This is easy to use but not much good if you are expecting megabytes to be uploaded — in that case, use the FieldStorage class instead which is much more flexible. Changed in version 3.7: Added the encoding and errors parameters. For non-file fields, the value is now a list of strings, not bytes. Changed in version 3.7.10: Added the separator parameter.),oo644 for readable and 0o lines:. tail -f logfilein a separate window may be useful!) python script.py. import cgitb; cgitb.enable()to the top of the script. PATHis usually not set to a very useful value in a CGI script. suexecfeature. Footnotes
https://getdocs.org/Python/docs/3.7/library/cgi
2021-11-27T08:36:22
CC-MAIN-2021-49
1637964358153.33
[]
getdocs.org
Description Scrolls to the prior page of the document in a RichTextEdit control or RichTextEdit DataWindow. For syntax specific to DataWindow controls and child DataWindows, see the ScrollPriorPage method for DataWindows in the section called “ScrollPriorPage” in DataWindow Reference. Applies to DataWindow and RichTextEdit controls Syntax rtename.ScrollPriorPage ( ) Return value Integer. Returns 1 if it succeeds and -1 if an error occurs. Usage When the RichTextEdit shares data with a DataWindow, the RichTextEdit contains multiple instances of the document, one instance for each row. When the first page of the document for one row is visible, calling ScrollPriorPage goes to the last page for the prior prior page of the document in the RichTextEdit control rte_1. If there are multiple instances of the document, it can scroll to the prior instance: rte_1.ScrollPriorPage() See also
https://docs.appeon.com/pb2021/powerscript_reference/ch02s04s674.html
2021-11-27T08:53:15
CC-MAIN-2021-49
1637964358153.33
[]
docs.appeon.com
Exploring Azure Data with Apache Drill, Now Pre-Installed on the Microsoft Data Science Virtual Machine This post is authored by Gopi Kumar, Principal Program Manager in Microsoft's Data Group. We recently came across Apache Drill, a very interesting data analytics tool. The introduction page to Drill describes it well: "Drill is an Apache open-source SQL query engine for Big Data exploration. Drill is designed from the ground up to support high-performance analysis on the semi-structured and rapidly evolving data coming from modern Big Data applications, while still providing the familiarity and ecosystem of ANSI SQL, the industry-standard query language.". Drill supports several data sources ranging from flat files, RDBMS, NoSQL databases, Hadoop/Hive stored on local server/desktop or cloud platforms like Azure and AWS. It supports querying various formats like CSV/TSV, JSON, relational tables, etc. all from the familiar ANSI SQL language (SQL remains one of the most popular languages used in data science and analytics). The best part of querying data with Drill is that the data stays in the original source and you can join data across multiple sources. Drill is designed for low latency and high throughput, and can scale from a single machine to thousands of nodes. We are excited to announce that Apache Drill is now pre-installed on the Data Science Virtual Machine (DSVM). The DSVM is Microsoft's custom virtual machine image on Azure, pre-installed and configured with a host of popular tools that are commonly used in data science, machine learning and AI. Think of DSVM as an analytics desktop in the cloud, serving both beginners as well as advanced data scientists, analysts and engineers. Azure already provides several data services to store and process analytical data ranging from blobs, files, relational databases, NoSQL databases, and Big Data technologies supporting varied types of data, scaling / performance needs and price points. We wanted to demonstrate how easy it is to setup Drill to explore data stored on four different Azure data services – Azure Blob Storage, Azure SQL Data Warehouse, Azure DocumentDB (a managed NoSQL database) and Azure HDInsight (i.e. managed Hadoop) Hive tables. Towards that end, we've published a tutorial on the Cortana Intelligence Gallery that walks you through the installation and how to query data with Drill. the tutorial that will guide you through the steps to set up connections from Drill to different Azure Data services. Drill also provides an ODBC/JDBC interface, allowing you to perform data exploration on your favorite BI tool such as Excel, Power BI or Tableau, using SQL queries. You can also query data from any programming language such as R or Python with ODBC/JDBC interfaces. While on the Data Science Virtual Machine, we encourage you to also take a look at other useful tools and samples that come pre-built. If you're new to the DSVM (which is available in Windows and Linux editions, plus a deep learning extension to run on Nvidia GPUs), we invite you to give the DSVM a try through a Azure free trial. We also have a timed test drive, available for the Linux DSVM now, that does not require an Azure account. You will find more resources to get you started with the DSVM below. In summary, Apache Drill can be a powerful tool in your arsenal, and can help you be nimbler with your data science projects and gain faster business insights on your big data. Data scientists and analysts can now start exploring data in its native store without having to wait for ETL pipelines to be built, and without having to do extensive data prep or client side coding to bring together data from multiple sources. This can be a huge boost to your teams' agility and productivity. Gopi)
https://docs.microsoft.com/en-us/archive/blogs/machinelearning/exploring-azure-data-with-apache-drill-now-part-of-the-microsoft-data-science-virtual-machine
2021-11-27T07:54:39
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
Welcome to the Mobiscroll API Reference. You can find the most up to date information about usage and customization options here. Having trouble? Ask for help. Getting Started with Mobiscroll for Javascript <script src="js/mobiscroll.javascript.min.js"></script> <link href="css/mobiscroll.javascript.min.css" rel="stylesheet" type="text/css"> 3. Add an input to your HTML markup <input id="myInput" /> 4. Initialize your component // create a datepicker with default settings mobiscroll.datepicker('#myInput'); // create a timepicker with default settings mobiscroll.datepicker('#myInput', { controls: ['time']}); // create a datetimepicker with default settings mobiscroll.datepicker('#myInput', { controls: ['datetime']}); mobiscroll = require('path/to/mobiscroll/js/mobiscroll.javascript.min'); mobiscroll.datepicker('#myInput'); require(['path/to/mobiscroll/js/mobiscroll.javascript.min'], function (mobiscroll) { mobiscroll.datepicker('#myInput'); });' });
https://docs.mobiscroll.com/5-11-1/javascript/getting-started
2021-11-27T08:01:23
CC-MAIN-2021-49
1637964358153.33
[]
docs.mobiscroll.com
Unisys Stealth¶ Introduction¶ Unisys Stealth is a network security tool for safeguarding sensitive information across shared networks. By creating pre-defined communities of interest (COIs) and curating user access to these groups, the need to create separate networks for handling restricted data is reduced. Morpheus includes a full integration with Stealth allowing administrators to create and manage COIs, work with configurations and roles, and provision new endpoints into COIs. Stealth Concepts¶ Communities of Interest (COIs): A collection of endpoints cryptographically separated so they can only communicate to each other Endpoints: Any system or device with a Stealth agent Configuration: Tells the endpoints which authorization methods and services to use for obtaining COI membership Role: Defines COI membership. Users and groups are assigned to a role which grants access to that role’s COIs Filters: Customizes Stealth communication. More specifically, filters constrain Stealth communication to specific addresses, ports, or protocols and allow Stealth endpoints to communicate with non-Stealth endpoints Integrating Stealth with Morpheus¶ Navigate to Infrastructure > Network > Integrations Click + ADD Complete the following fields: NAME: A name for this Stealth integration in Morpheus API HOST: The address for the server hosting Stealth (ex:) USERNAME: A username for the portaladmin, or the user who logs into the web console PASSWORD: A password for the account MANAGER USERNAME: A username for the manager account MANAGER PASSWORD: A password for the manager account Click ADD SECURITY INTEGRATION Summary View¶ The default view when accessing a Stealth integration in Morpheus is the Summary view. In addition to basic information about the Stealth server itself, we can see system status, license and service information. Endpoints¶ Endpoints are any system or device with a Stealth agent. Stealth endpoints can be provisioned in Morpheus in the same way other cloud resources are provisioned. With Stealth integrated, workloads provisioned on the appointed networks are assigned Stealth configuration and a Stealth role during the provisioning process. Based on a user’s assigned roles and COIs assigned to those roles, only workloads on the appointed COIs will be visible to the user. Additionally, workloads can only see other workloads within their COI. Note Machines on the same network which are not Stealth-managed will be able to see and communicate with each other but will not be able to see workloads which are assigned to a Stealth COI. Endpoints View¶ The endpoints view will display all available Stealth-managed resources. Endpoints are not created here but will be synced into this view as they are created (through Morpheus provisioning or outside creation). Stealth-managed endpoints can be deleted by clicking the trash can icon at the end of each row in this view. The following fields are exposed in the endpoints list view: DISPLAY NAME NAME DESCRIPTION Configurations¶ Configurations in Stealth are the top-level construct and house COIs, roles, groups, users and endpoints. Your Stealth integration will include at least one configuration but often they will include more. Configurations are primarily created and managed from the Stealth console but we can opt to remove them from Morpheus by clicking the trash can icon at the end of each row on the configurations list page. Configurations are selected along with a Stealth role at provision time in Morpheus. Roles¶ Users are placed into roles which are associated with COIs. A user’s role(s) determine which COIs he or she will be able to see and access. Roles are synced into Morpheus once the integration process is complete and can be viewed in the Roles list view. Roles can also be created from the Morpheus integration as described later in this section. Roles View¶ The following fields are exposed in the roles list view: NAME: The name of the role DESCRIPTION: A description value for the role Note More detail on each item in the roles list can be revealed by clicking on the (i) icon in each row, including the COIs associated with the role. Creating Stealth Roles¶ Navigate to Infrastructure > Network > Integrations > (Your Stealth integration) > Roles Click + CREATE ROLE Complete the following fields: NAME: The name for the new role DESCRIPTION: A description value for the new role CONFIGURATION: Select an existing Stealth configuration to associate with the role ROLE TYPE: Identifies how the role is used. Can be Global (for roles used to isolate endpoints and users), Service (for roles used by endpoints in service mode to access an authorization service) or WorkGroup (for roles used by endpoints in normal operation) FILTER SET: Choose a filter set to apply to the role to allow or deny clear text communication with non-Stealth-managed endpoints COIs: Select the COIs to be associated with the role PROVISION CHANGES: Click ADD ROLE COIs (Communities of Interest)¶ COIs exist within configurations and create a logical separation between endpoints in separate COIs. Communication between endpoints in the COI is encrypted and those outside the COI are unable to see or access endpoints despite being on the same network. On completing the integration, Morpheus will sync in existing COIs. COIs can also be created from Morpheus UI which is described later in this section. COIs are deleted by clicking on the trash can icon at the end of each row in the COIs list view. COIs View¶ The following fields are exposed in the roles list view: NAME: The name of the COI DESCRIPTION: A description value for the COI Creating Stealth COIs¶ Navigate to Infrastructure > Network > Integrations > (Your Stealth integration) > COIs Click + CREATE COI Complete the following fields: NAME: The name for the new COI DESCRIPTION: A description value for the new COI TYPE: Workgroup or Service DIRECTION: Default (enables COI to accept inbound and initiate outbound tunnels), Initiate (restricts the COI to only initiate outbound tunnels), or Accept (restricts the COI to only accept inbound tunnels) Click CREATE COI Filters¶ Filters customize Stealth communication. More specifically, filters constrain Stealth communication to specific addresses, ports, or protocols and allow Stealth endpoints to communicate with non-Stealth endpoints. Filters are synced into Morpheus when integrating with Stealth and are viewable from the filters list view. They are created and managed from within the Stealth console itself. When accessing the filters list view, all filter sets are displayed. Each filter set can be expanded to view the individual filters within. Information on each filter is displayed once the filter set has been expanded to reveal the individual filters. Provisioning with Stealth¶ In order to provision new Stealth-managed endpoints, Stealth must be integrated with Morpheus as described above. In addition, Stealth must be selected as the Security Server for the cloud you’re provisioning into. Security servers can be selected at the time a new Cloud integration is created or by editing an existing Cloud integration. Choosing a Cloud Security Server¶ Assuming the Cloud is already integrated with Morpheus, use the steps below to set the security server and activate Stealth prompts at provision time. The steps to set the security server during the time the cloud is initially integrated with Morpheus is very similar. Navigate to Infrastructure > Clouds > (Your Selected Cloud) Click EDIT Click on Advanced Options to reveal additional selections In the dropdown for SECURITY SERVER, choose an existing Stealth integration Provisioning to a Stealth-enabled Cloud¶ Once we have selected our Stealth integration as the security server for at least one Cloud in Morpheus, new Instances (endpoints) can be provisioned and managed by Stealth. Navigate to Provisioning > Instances Click + ADD Select the Instance Type, Cloud, and Group making sure to choose a Cloud that has been set up for an existing Stealth integration On the Configure tab of the provisioning wizard, choose a Stealth Configuration and a Stealth Role according to the needs of the machine(s) being provisioning Once the provisioning process is complete, the new Stealth-managed endpoints will be available and restricted based on the Stealth implementation
https://docs.morpheusdata.com/en/5.2.4/integration_guides/Networking/stealth.html
2021-11-27T09:06:16
CC-MAIN-2021-49
1637964358153.33
[array(['../../_images/add_stealth.png', '../../_images/add_stealth.png'], dtype=object) array(['../../_images/stealth_summary.png', '../../_images/stealth_summary.png'], dtype=object) array(['../../_images/add_role.png', '../../_images/add_role.png'], dtype=object) array(['../../_images/create_coi.png', '../../_images/create_coi.png'], dtype=object) array(['../../_images/provision_endpoint.png', '../../_images/provision_endpoint.png'], dtype=object)]
docs.morpheusdata.com
KPI Options Summary its own KPI Option tokens. These can be backed by any approved ERC-20 token and can be valued against any KPI that a project wants to improve. #Why Should DAOs use UMA KPI Options? The core function of KPI Options is that it aligns the incentives of the community with the underlying fundamentals of the protocol. The community succeeding and the protocol succeeding should be one and the same. Traditional airdrops of liquid tokens can fuel network growth but can result in increased sell pressure on the token price. It is difficult to predict the effect of these airdrops, and the risk of dumps makes airdrops impractical for projects with tokens already in circulation. Instead, KPI Options are synthetic tokens that will pay out more rewards if the KPI grows to predetermined targets before a given expiry date. Every KPI option holder has an incentive to grow that KPI because their option will be worth more. This aligns individual token holder interests with the collective interests of the protocol. Some examples of KPI Options that can be created on the UMA platform: - TVL Options: for DeFi protocols, these options pay out more project tokens as TVL goes up. Option holders are united in growing protocol TVL. - Volume Options: for exchange protocols, these options pay out more project tokens as trading volume increases. Option holders are united in growing volume metrics. - DAU Options: for dapps, these options pay out more rewards as DAU numbers go up. Option holders are united in growing dapp/protocol usage. #KPI Options Example As an example, let's assume the UMA treasury has decided to create a KPI Option to incentivize its community to help improve important metrics to the protocol. The first step in the process is to define a target KPI to incentivize. After going through various metrics, UMA decides to create a contract (UMA-TVL-1221) with payouts based on the UMA TVL using a target date of December 31, 2021. UMA allocates 10,000 $UMA to the KPI Option contract that pays out a specified number of $UMA based on the TVL locked across all UMA contracts to liquidity providers. The payout structure can be customized for specific payout intervals. For simplicity, we will design the UMA-TVL-1221 contract payout logic to have bounds between $100 million and $1 billion: - If the UMA TVL is less than $100 million the payout would be 0.1 $UMA and if the UMA TVL is greater than $1 billion the payout would be 1 $UMA. - If the UMA TVL is between $100 million and $1 billion, the payout would be directly comparable to the UMA TVL at expiry. For example, if the UMA TVL expires at $250 million, each KPI Option would be worth 0.25 $UMA.
https://docs.umaproject.org/kpi-options/summary
2021-11-27T09:33:02
CC-MAIN-2021-49
1637964358153.33
[]
docs.umaproject.org
Azure Redis Cache on ASP.NET Core The on distributed servers, allowing the cache to be shared to more than one server. Azure Redis uses the open source framework Redis to implement the distributed cache. To learn more, please take a look at the entire post at:
https://docs.microsoft.com/en-us/archive/blogs/pfelatam/azure-redis-cache-on-asp-net-core
2021-11-27T09:09:45
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
... In addition, members shall permit Eduserv OpenAthens to retain and publish the metadata described in the Federation Metadata section of this documentation. Members shall permit Eduserv OpenAthens to publish members names for the purpose of marketing the OpenAthens Federation. Eduserv OpenAthens retains the authority to administer the Federation in its aims, objectives and rules as may from time to time be published. ... Overview Content Tools
https://docs.openathens.net/pages/diffpagesbyversion.action?pageId=4128963&selectedPageVersions=3&selectedPageVersions=4
2021-11-27T08:50:39
CC-MAIN-2021-49
1637964358153.33
[]
docs.openathens.net
. Transform: Apply a COUNTA function on the source column: derive type:single value:COUNTA(Val) as:'fctnCounta' Apply a COUNTDISTINCT function on the source: derive type:single value:COUNTDISTINCT(Val) as:'fctnCountdistinct' Results: Below, both functions count the number of values in the column, with COUNTDISTINCT counting distinct values only. The empty value for r007 is counted by both functions. This page has no comments.
https://docs.trifacta.com/display/r051/EXAMPLE+-+COUNT+Functions
2021-11-27T07:58:17
CC-MAIN-2021-49
1637964358153.33
[]
docs.trifacta.com
Contents: Contents: Before you begin performing analytics on a dataset, it is important to identify and recognize outlier data patterns and values. Unusual. NOTE: Analysis of trends and outliers across multiple columns requires different techniques. See Analyze across Multiple Columns. Single-column outliers For assessing anomalies in individual columns, Trifacta Wrangler provides visual features and statistical information to quickly locate them. Data Histogram. Trifacta Wrangler.
https://docs.trifacta.com/pages/viewpage.action?pageId=177696678&navigatingVersions=true
2021-11-27T07:52:24
CC-MAIN-2021-49
1637964358153.33
[]
docs.trifacta.com
.PACEsetting, migrateor as part of migrations and remove them as part of a flushmanagement=Falsecontains a ManyToManyFieldthatset as needed) and use the ManyToManyField.throughattributeand create a copy of an existing model. However, there’s a better approach for that situation: Proxy models. order_with_respect_to¶ Options. order_with_respect_to¶ Makes this object orderable with respect to the given field, usually a ForeignKey. This can be used to make related objects orderable with respect to a parent object. For example, if an Answerrelates to a Questionobjectfield ascending, use this: ordering = ['pub_date'] To order by pub_datedescending, use this: ordering = ['-pub_date'] To order by pub_datedescending, then by authorascending,).".
https://docs.djangoproject.com/en/1.8/ref/models/options/
2018-03-17T06:15:02
CC-MAIN-2018-13
1521257644701.7
[]
docs.djangoproject.com
Support for Other Languages¶ Here are the ports I’m currently aware of: - C++: dvirtz and Greg Wicks <>`__ - C#: ffleischer and Taylor Finnell - Go: Lari Rasku - Java: Jens Villadsen and Nick Martin - Javascript: Lari Rasku. There’s also my Google Music Turntable uploader; it’s not a port, but may be useful as an example. - Node: Jamon Terrell - Objective-C: Gregory Wicks - PHP: raydanhk - Ruby: Loic Nageleisen They’re in various states of completion and maintenance because, well, building a port is tough. Alternatively, consider using GMusicProxy or copying its approach. Building a Port¶ Get in touch if you’re working on a port. I’m happy to answer questions and point you to relevant bits of code. Generally, though, the protocol package is what you’ll want to look at. It contains all of the call schemas in a psuedo-dsl that’s explained here.
http://unofficial-google-music-api.readthedocs.io/en/latest/ports.html
2018-03-17T05:57:45
CC-MAIN-2018-13
1521257644701.7
[]
unofficial-google-music-api.readthedocs.io
Concrete-python Documentation¶ Concrete-python is the Python interface to Concrete, a natural language processing data format and set of service protocols that work across different operating systems and programming languages via Apache Thrift. Concrete-python contains generated Python classes, utility classes and functions, and scripts. It does not contain the Thrift schema for Concrete, which can be found in the Concrete GitHub repository.
http://concrete-python.readthedocs.io/en/stable/
2018-03-17T06:18:26
CC-MAIN-2018-13
1521257644701.7
[]
concrete-python.readthedocs.io
Worldnet provides the ability for a Secure Card token stored under one terminal ID to be used for payments on other terminals IDs. This is limited to Terminal IDs that are under merchants that are configured to be within the same “Merchant Portfolio” (a merchant portfolio is a group of merchants in our system). In this scenario, the Secure Card is registered under one terminal ID as normal (this is called the “Terminal Holder” terminal, or the parent terminal of the token) and a list of permitted terminal IDs is sent in the registration request. If successfully registered then all the other terminal IDs will be able to process payments on that token or search for the token. Only requests to the terminal holder (parent) terminal ID will be able to update or delete the token though.
http://docs.worldnettps.com/doku.php?id=developer:integrator_guide:6._secure_card_storage:6.1._merchant_portfolio_secure_cards&ref=sb
2017-01-16T14:53:46
CC-MAIN-2017-04
1484560279189.36
[]
docs.worldnettps.com
Scope This document details how to integrate to the Worldnet sandbox account. It requires familiarity with the basics of integrating with our hosts, detailed in our Integrator Guide. It is not intended to offer complete testing guidelines. For these please see the Testing Guide. What is a Sandbox account A sandbox account is used to test basic integration with our payment gateway. These accounts (one per currency) are publically available on our test host and can be used without requesting access from Worldnet. They provide very basic functionality and no access to the back-end ‘Merchant Selfcare System’ What can you test? [email protected]. In order to test a Hosted Payment Page customization you need to obtain access to a full test account first. Please note that 3D Secure cannot be tested and it works only with live environment and live credit cards. Account Details. Test cards and responses Available test cards are listed below:
http://docs.worldnettps.com/doku.php?id=developer:integration_docs:sandbox_testing&ref=sb
2017-01-16T15:03:26
CC-MAIN-2017-04
1484560279189.36
[]
docs.worldnettps.com
General questions Installation Q: Should I install the program on a server or on a workstation? A: Both a server and a workstation can run Total Software Deployment., and also deploy to them, Storages or move the program to another computer? A: The Storages are located in separate folders (file system directories). The Network storage can be located by right-clicking the Storage root group and selecting Show in Explorer. Then go up one level and copy/archive the whole storage folder. The Software storage can be located by right-clicking any software in the Storage and selecting Show in Explorer. Then go up two levels and copy/archive the whole Storage folder. Program settings can be backed up by copying/archiving a folder entitled Total Software Deployment in your account's Application Data folder (referred to by %APPDATA% environment variable), if you chose Install for me during the program installation. If you chose Install for all, the settings are stored in "C:\Documents and Settings\All Users\Application Data\Total Software Deployment" (Windows 2000/XP/2003) or "C:\ProgramData\Total Software Deployment" (Windows Vista/7/8/10/2008/2012). You can also find this folder by clicking Open tasks folder in the Scanner tab. To restore the program, install it on another computer (but don't run it) and extract your backed-up settings to the Total Software Deployment). Network storage Q: Is it possible to use the same Network storage in both TSD and TNI 3? A: Yes, it is. The Network storage is fully compatible with the TNI 3 storage. It's also possible to use the same network storage in TSD and TNI 3 at the same time, as both programs will detect storage changes and update information. However, TSD will only display and allow to modify Windows nodes. Q: Is it possible to look up which software versions are installed on computers in the Network storage? A: Yes, it's possible. Please use the Assistant. Detailed information on how to use the Assistant can be found in this section. Software storage Q: What should I do if the installer consists of more than one file? A: Please see the following section: Software tree - Altering the Software storage structure - Adding software. Selecting a method for recording & deployment Silent Q: When should I use the Silent installation method? A: Most modern installation packages support the silent installation mode. In this mode programs install without user interaction: all processes perform automatically. This mode is enabled (in most cases) by adding parameters to the command line of the executable. Setting a few parameters may be required to achieve the desired result. Silent Installation is the most preferable method to use. Q: In which cases is it not possible to use the Silent method to create deployment packages? A: Most modern installation packages support the silent installation mode, yet there are exceptions: - Online downloaders may have parameters that allow the downloader to operate without user intervention, but at the same time the downloadable installation package is either run without any options or the downloader’s parameters are not compatible with it; - Self-extracting archives may have parameters that allow the downloader to extract the contents without user intervention, but at the same time they may not be designed to make the installation package run with the required parameters; - Installation packages where silent installation is either not supported or intentionally disabled during package creation. Q: Could TSD incorrectly determine the type of the installer, and, when TSD does determine the type correctly, could the silent installation keys still fail to be compatible (installation requiring user interaction)? A: Yes, it is possible. In order to verify that the type of the installer has been determined correctly, you must use the Test run (local) option. If the installer requires user interaction to install a program, then the specified parameter package is not compatible with the installer. Q: What should I do if TSD could not determine the type of the installer automatically, but I know either the type of the installer or which parameter to use for the silent installation? A: In the former case, you can manually select the type of the installer from the list, and then TSD will provide the necessary parameters for the silent installation. In the latter case, input the silent installation parameters manually. In any case, we recommend you use Test run (local) to ensure successful deployment. Q: Can I create a deployment package if my installer installs silently without any parameters? A: Yes, you can. In this case, you should use the Use empty command line option. Then TSD will not add any parameters to the command line of the executable when deploying remotely. Q: What should I do if I selected the type of the installer manually, and now I cannot recall what type was initially determined by TSD? A: You can use the Redetermine the installer type button, then Set default command line for the silent install. The program will redetermine the installer type and offer you a minimal parameter string for silent installation. Q: Why do you recommend not to execute the installation package from a batch file? A: It's not prohibited, but because of the difficulties in tracking the execution status of such a package, the information about the deployment process will often be wrong, and we cannot guarantee that this package will be deployed successfully. Q: What should I do if I need to execute a few CMD commands before and after the installer? A: Create a new deployment package with a batch file, add the installer and, if necessary, another batch file as Add-ons. For more information, see Add-ons. Q: What should I do if I need to execute a few CMD commands before and after the installer, but the installer is multi-file? A: Create two deployment packages: one with a batch file, the other with the multi-file installer. If necessary, add a batch file to the 2nd package as an Add-on. Before deployment, add the 2 packages to the Software deployment list in the correct order in which they should be executed on the remote computer. For more information, see Add-ons. Q: What should I do if I can’t use the Silent method to create a deployment package? A: Try using other methods offered by TSD (Macro, Sysshot). Macro Q: When should I use the Macro method? A: This method is suitable for most software with a standard installation wizard. Q: In which cases is it not possible to use the Macro method? A: Software vendors may develop their own installer, also using their own controls, which can imitate the look and behavior of a number of standard controls. The macro will not recognize the changes in such controls. Also, ads may be displayed in the installer. They may change over time and cause problems during deployment. TSD keeps track of user's interaction with such control elements and displays the following error message: "During the macro recording you have interacted with nonstandard control(s) which are not compatible with the Macro method". In such a case, remote deployment will be impossible. Q: What should I do if, after using the Macro method, TSD displays a message that interaction with non-standard control elements has occurred? A: Try to create the deployment package again without interacting with such controls. If it’s not possible, try another method. Q: When do I have to select the Macro method? A: This method has no significant advantages over Silent and is only recommended for use when, for whatever reason, silent installation is impossible. Q: What should I do if I can’t use the Macro method to create a deployment package? A: You can always try using other methods offered by TSD. Sysshot Q: When can I use the Sysshot method? A: This method is suitable for small software. We recommend using this method only if you’re an advanced user and when the other two methods cannot be used. Q: In which cases is it not possible to use the Sysshot method? A: You are strongly discouraged from using this method for deployment of drivers, codecs, system utilities and libraries. Q: Is it possible that a package recorded using Sysshot and deployed remotely will not work? A: Yes, it’s possible. Sometimes, if the target system architecture is different from the architecture of the system where the deployment package was created, conflicts may appear. This occurs due to some differences in the registry structure between x64 and x86 architectures.SD’s next launch or immediately, if TSD is running) or imported by using theStorage main menu or any group's context menu. - Computers are not in Port numbers Q: How can I find which port numbers are used by TSD, so that I can configure the firewall? A: TSD uses the SMB protocol to scan Windows computers. It can be allowed by enabling the File and Printer Sharing exception in the Windows Firewall or TCP port 445 in other firewalls. You could also enable TCP port 139 (NetBIOS) for older systems. Windows Firewall in Windows Vista, 7 or newer has a special exception entitled Windows Management Instrumentation (WMI) which can be enabled and thus save you from the necessity of setting up the policies up following can I fix the "Call was canceled by the message filter" error? the WMI diagnosis utility from Microsoft; - Follow these tips to repair WMI on the remote computer. Deployment questions Errors when adding deployment tasks Q: How can I resolve the error: "application has no bitness specified"? A: This message will appear if software bitness has not been set when creating a software deployment package. Go to the Software editor and set bitness in the passport. More information about bitness can be found in the Program bitness section. Q: How can I resolve the error: "The [silent|macro|sysshot] file is of unknown version"? A: This error occurs when a package created in a newer version of TSD is being deployed using an old version. To solve the problem, update to the latest version of TSD. On the other hand, new versions of TSD support older version packages. Q: How can I resolve the error: "The recorded macro file contains interactions with controls not compatible with the macro"? A: The error is caused by user interaction with a control incompatible with the Macro method. You can see the incompatible control in the Macro editor: it will be highlighted on the screenshot. If it's possible, try re-recording the method without using this control; if the error doesn't disappear, then it's most likely that this installer is not supported by the Macro method. Try using other deployment methods: Silent or Sysshot. Q: How can I resolve the error: "Some parameters string(s) in the silent method have not been filled"? A: The error occurs when a Silent package is added to the Software deployment list, and one or several parameter strings are not set. Open this package in the Software editor and make sure the parameter fields are filled in for the software and any add-ons. Tick Use empty command line for each software or add-on which do not require any parameters. Q: How can I resolve the error: "this asset has neither a network name nor an IP address specified" or "this asset has no IP address specified"? A: One of the nodes moved to the Deployment targets list has no IP address set. In Options, set the Handling of dynamic IP addresses setting to option #1 or #2, then ping the computer and make sure it's the correct deployment target. Deployment errors Q: I use TNI 3 storage, and when trying to deploy to one of my scanned nodes, I get the following error: "Remote service manager error: Access is denied". What's the problem? A: This problem may occur if you're using Active Directory. TNI 3 uses 2 protocols to scan the network: SMB and RPC. SMB is the principal method of scanning. However, if an access error occurs, TNI will scan using the backup option that is RPC, for which having domain user privileges will be sufficient. Deploying software using the RPC protocol is impossible, therefore RPC scanning is disabled. The same SMB protocol is used for deployment, but domain administrator privileges are required. Q: How can I resolve the error: "For correct deployment, the target user must be logged in on the remote computer and his session must be active"? A: This error occurs because environment of the user for which they are installed is necessary for normal installation. If you run the installer as another user or as System, there may be issues during software installation: shortcuts missing from the desktop, failure to start (for another user) or installer errors during deployment (when run as System). Therefore, the installer should be run as current user when deploying Silent or Macro packages to remote computers. Q: How can I resolve the error: "Creating remote service error: The specified service has been marked for deletion" or "Creating remote service error: Overlapped I/O operation is in progress"? A: The main causes of such issues: - Opened Process Explorer (SysInternals); - Opened Task Manager; - Opened MMC; - Opened Event Viewer; - An instance of Visual Studio using the debugging service. If you cannot accurately determine the cause, we advise to restart the target node and repeat the deployment. Q: How can I resolve the error: "While copying file an error has occurred"? A: This error occurs because the installer process was not shut down when this software was last deployed to the same computer, and presently TSD cannot copy the installer to the temporary directory on the remote computer because the installer file left from the previous deployment is busy. We recommend either rebooting or remotely connecting to the computer in order to kill the installer process. Q: How can I resolve the error: "Silent installation was terminated due to timeout. The command line parameters or the timeout value may be inappropriate"? A: This error occurs when the deployment of the software did not complete within the allotted timeout. Here are the possible causes and how to deal with them: - The command line parameters used are unsuitable for silent installation in this case. To make sure the parameters are correct, first perform a Test run (locally). - The target computer is low-power and/or under heavy load, which slows the software deployment, and the specified timeout is not enough to complete the deployment. To solve this problem, when setting a timeout, consider the possible scenarios that could affect deployment on the target computer. - External factors on the target computer – such as no Internet connection or absence of a system package (VC++ Redistributable, .NET Framework, etc) – will impede deployment even when the parameters are correct. To resolve this issue, contact tech support for the software and find out what packages are required for installation. More about waiting for the installation process to complete: in the Timeout section here. Q: What can I do if TSD reports that software deployment using the Silent method has completed successfully, but in reality the software was not installed on the target computer? A: TSD monitors installation progress, however such a scenario could happen if the installer process finished correctly, but has not in fact installed anything. Possible causes of this include loss of Internet connection, absence of a system package (VC++ Redistributable, .NET Framework, etc) and an error in command line parameters. To resolve the problem, perform a Test run (locally) from the Software editor, and then, if the problem is still not evident, try running the installer manually on a remote computer both without CMD parameters and with parameters set in TSD, and see if there's a difference. Q: How can I resolve the error: "The configuration.xml file was not found. Please check file existence"? A: This error can occur when deploying MS Office to a remote node, but the configuration file is missing from the Software storage. To resolve the problem, edit the Office package (the file will be automatically created once the editor is opened), modify the configuration file if necessary and repeat deployment. Q: Why didn't my Macro package deploy to the remote computer? It was recorded correctly! A: This scenario is possible if the program you're trying to deploy is already installed there, and the installer may be offering you to uninstall the program instead. Also, in another environment, the installer may have a different set of steps. Thus, certain steps in which actions were recorded may be missing during playback, or new unrecorded steps may appear. Q: What can I do if a package was recorded correctly using the Macro method, but the following error occurs: "Cannot find the installer window. You can take a look at the last screenshot of the installer window"? A: Deployment history will in most cases contain a link to a screenshot of the installer screen when the macro ceased playback (Deployment log will also contain the link.) The same entry will also contain a Software editor link to the action that stopped the playback. If the package is rerecorded, those links will become obsolete and be deleted. To accurately determine the reason why the necessary screen cannot be found during playback, compare the screenshot taken during the deployment with the one taken during the recording. Q: How can I resolve the error: "Installer process(es) terminated due to timeout."? A: This error will occur during deployment using the Macro method if the macro playback is over, but the installer process remains running until the 10 minute timeout has elapsed. Such a scenario will most often occur if the macro was recorded on a computer where the same program is already installed or if installer processes are monitored incorrectly according to its settings. For more information about waiting for installer child processes, see Monitoring installer processes in the Macro section. Q: How can I resolve the error: "Cannot find the installer window. The installer process on the remote computer no longer exists, therefore, it's not possible to obtain a screenshot of the installer's last screen"? A: This error will occur during deployment using the Macro method if the installer has closed before playing back all the recorded actions. To resolve this problem, make sure that the software is compatible with the target operating system and that the same steps are needed to install the software on the target machine as on the one where the package was recorded. Q: How can I resolve the error: "Services are non-interactive on the remote computer; therefore, deployment of MSI files using the Macro method is impossible"? A: Microsoft Installers have a client-server structure. MSI Installer Client is responsible for the user interface and for collecting information through user interaction and Server is directly responsible for installation. When services are non-interactive during remote installation, MSI Client considers itself incapable of drawing the interface and closes immediately, and therefore the TSD service won't find the expected installer window to interact with. In order to deploy MSI files, use the Silent method or enable interactive services on the remote computer. In order to make services interactive, do the following: - Open regedit on the remote computer and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows; - Change the NoInteractiveServices registry key to 0. Q: What can I do if the following warning appeared during deployment: "TSD Service is already running. Abort?"? A: One possible cause for this warning is the TSD service which may still be running on the target computer following previous deployment because the installer process itself is still running. In order to continue with the current deployment, the hung TSD service has to be stopped. Another cause could be current software deployment to the same computer using another copy of TSD. In this situation, you must wait until the deployment is complete. Java Installer deployment Q: I'm trying to deploy Java using the configuration file, but it comes to nothing. Am I doing something wrong? A: A number of errors in Java Installer may prevent normal deployment of Java Runtime Environment and Java Development Kit on target computers. One problem is with the INSTALLCFG command line parameter which only accepts the absolute path to the configuration file. So, it accepts neither a network path nor a relative path (i.e. if the configuration file is in the installer folder). The best solution available at this time is provided below: - Create a batch file with the following content:>"%CommonAppData%\Oracle\Java\java.settings.cfg" - Before adding the batch file into TSD, place it into a separate folder and copy the Java installation configuration file into this folder. - Also add a command to the batch file to copy the configuration file into an existing folder on the target computer. Keep in mind that the configuration file will be copied along with the batch file to the target computer during deployment; i.e. both files will be placed in the same folder. - Add the batch file to TSD as a multi-file installer; select the folder that contains the two files. - In the Software editor, add the Java installer as an add-on, and set the path to the folder containing the configuration file (i.e. the folder where the configuration file will be copied by the batch file) in the command line parameter INSTALLDIR. - Add another batch file as an add-on to delete the configuration file that was copied. Q: I've added parameters from the configuration file to the Java command line, but during deployment nothing happens on the target computer even though TSD reports successful deployment. What can be done? A: Due to an error in Java installer, when it's run as System (and that is the way during remote deployment), it cannot create the temporary configuration file. See the solution: - Add a batch file with the following content to TSD: AUTO_UPDATE=0>"%CommonAppData%\Oracle\Java\java.settings.cfg" - In the Software editor, add the Java installer as an add-on. Q: Can I uninstall an old version of Java using your program? A: Yes, however, it will be necessary to find out the name of the old version of Java on the target computer. Create a batch file similarly to the example and specify the exact name of the version that should be uninstalled between the single quotation marks: wmic product where "name = 'Java 8 Update 66'" call uninstall /nointeractive wmic product where "name = 'Java SE Development Kit 8 Update 66'" call uninstall /nointeractive Preparing to deploy MS Office Click-to-Run Q: I have a box version of MS Office 2013 (2016). How can I deploy it? A: To deploy retail editions of MS Office, follow these steps: - Copy disc contents (distribution folder) to the computer. - Download Microsoft Office Deployment Tool for your version of Office from the official website. - Extract officedeploymenttool.exe. - Place the extracted contents into the MS Office distribution folder replacing setup.exe. - In TSD, add setup.exe to the Software storage as a multi-file installer by ticking the checkbox and specify the path to the Office distribution folder. - For information on how to further setup and deploy MS Office, see the Configuration files for installers section and also the hint in the Software editor. Q: I'm missing the Click-to-Run executable, MS Office 2013 (2016) distribution and Microsoft Office Deployment Tool. What can I do? A: To obtain the required files for Click-to-Run deployment, do the following: - Download Microsoft Office Deployment Tool for your version of Office from the official website. - Extract officedeploymenttool.exe into an empty folder. - Add extracted setup.exe to the Software storage as a multi-file installer by ticking the checkbox and specify the path to the folder containing the files extracted from officedeploymenttool.exe. - For information on how to further setup and deploy MS Office, see the Configuration files for installers section and also the hint in the Software editor.
http://docs.softinventive.com/tsd/faq
2017-01-16T15:02:23
CC-MAIN-2017-04
1484560279189.36
[]
docs.softinventive.com
6.9 API for Relative Paths The Racket installation tree can usually be moved around the filesystem. To support this, care must be taken to avoid absolute paths. The following two APIs cover two aspects of this: a way to convert a path to a value that is relative to the "collets" tree, and a way to display such paths (e.g., in error messages). 6.9.1 Representing Collection-Based Paths The cache argument is used with path->pkg, if needed. 6.9.2 Representing Paths Relative to "collects" The path argument should be a complete path. Applying simplify-path before path->main-collects-relative is usually a good idea. For historical reasons, path can be a byte string, which is converted to a path using bytes->path. See also collects-relative->path. 6.9.3 Displaying Paths Relative to a Common Root If cache is not #f, it is used as a cache argument for pkg->path to speed up detection and conversion of package paths. If the path is not absolute, or if it is not in any of these, it is returned as-is (converted to a string if needed). If default is given, it specifies the return value instead: it can be a procedure that is applied onto the path to get the result, or the result itself. Note that this function can return a non-string only if default is given and it does not return a string. The dirs argument determines the prefix substitutions. It must be an association list mapping a path-producing thunk to a prefix string for paths in the specified path. default determines the default for the resulting function (which can always be overridden by an additional argument to this function).
http://docs.racket-lang.org/raco/relative-paths.html
2014-08-20T12:40:34
CC-MAIN-2014-35
1408500808153.1
[]
docs.racket-lang.org
Integrator's Guide Document Purpose The product called eSignLive™ provides a complete e-signature platform for the Web, including preparing, distributing, reviewing, signing, and downloading documents. This guide tells developers how to integrate their Web application with eSignLive™, using a is divided into the following chapters: - Getting Started — An overview of concepts and procedures relevant to getting started with your integration - System Requirements — Summarizes the software requirements for integration - SDKs — A description of our SDKs (Software Development Kits), which are built on our REST API - REST API — A description of our REST API, which is the entry point for all external interactions with our product - Event Notifier — A description of the Event Notifier, a component that can notify integrators when certain steps of the integration have been accomplished. NOTE: SDKs and the REST API are alternative ways of integrating your Web application. TIP: For additional features that may prove useful when you're building your solution, the Feature Guides provide great examples for getting the most out of your integration.
http://docs.esignlive.com/content/c_integrator_s_guide/introduction/integrator_s_guide.htm
2017-06-22T22:08:08
CC-MAIN-2017-26
1498128319912.4
[]
docs.esignlive.com
THIS TOPIC APPLIES TO: SQL Server (starting with 2016). Possible values are SQL Server 2014 (120), SQL Server 2012 (110), and SQL Server 2008 (100). When a SQL Server 2005 database is upgraded to SQL Server 2014, the compatibility level for that database is changed from 90 to 100. The 90 compatibility level is not supported in SQL Server 2014. For more information, see ALTER DATABASE Compatibility Level (Transact-SQL). Containment type Specify none or partial to designate if this is a contained database..). Auto Create Statistics Specify whether the database automatically creates missing optimization statistics. Possible values are True and False. When True, any missing statistics needed by a query for optimization are automatically built during optimization. For more information, see CREATE STATISTICS (Transact-SQL). Auto Shrink Specify whether the database files are available for periodic shrinking. Possible values are True and False. For more information, see Shrink a Database. Auto Update Statistics Specify whether the database automatically updates out-of-date optimization statistics. Possible values are True and False. When True, any out-of-date statistics needed by a query for optimization are automatically built during optimization. For more information, see CREATE STATISTICS (Transact-SQL).. Containment In a contained databases, some settings usually configured at the server level can be configured at the database level. Default Fulltext Language LCID Specifies a default language for full-text indexed columns... Cursor Close Cursor on Commit Enabled Specify whether cursors close after the transaction opening the cursor has committed. Possible values are True and False. When True, any cursors that are open when a transaction is committed or rolled back are closed. When False, such cursors remain open when a transaction is committed. When False, rolling back a transaction closes any cursors except those defined as INSENSITIVE or STATIC. For more information, see SET CURSOR_CLOSE_ON_COMMIT (Transact-SQL). Default Cursor Specify default cursor behavior. When True, cursor declarations default to LOCAL. When False, Transact-SQL cursors default to GLOBAL. Database Scoped Configurations. This is equivalent to Trace Flag 9481. Legacy Cardinality Estimation for Secondary Specify the query optimizer cardinality estimation model for secondaries, if any, independent of the compatibility level of the database. This is equivalent to Trace Flag 9481... ANSI NULL Default Allow null values for all user-defined data types or columns that are not explicitly defined as NOT NULL during a CREATE TABLE or ALTER TABLE statement (the default state). For more information, see SET ANSI_NULL_DFLT_ON (Transact-SQL) and SET ANSI_NULL_DFLT_OFF (Transact-SQL). ANSI NULLS Enabled Specify the behavior of the Equals ( =) and Not Equal To ( <>) comparison operators when used with null values. Possible values are True (on) and False (off). When True, all comparisons to a null value evaluate to UNKNOWN. When False, comparisons of non-UNICODE values to a null value evaluate to True if both values are NULL. For more information, see SET ANSI_NULLS (Transact-SQL). ANSI Padding Enabled Specify whether ANSI padding is on or off. Permissible values are True (on) and False (off). For more information, see SET ANSI_PADDING (Transact-SQL). ANSI Warnings Enabled Specify ISO standard behavior for several error conditions. When True, a warning message is generated if null values appear in aggregate functions (such as SUM, AVG, MAX, MIN, STDEV, STDEVP, VAR, VARP, or COUNT). When False, no warning is issued. For more information, see SET ANSI_WARNINGS (Transact-SQL). Arithmetic Abort Enabled Specify whether the database option for arithmetic abort is enabled or not. Possible values are True and False. When True, an overflow or divide-by-zero error causes the query or batch to terminate. If the error occurs in a transaction, the transaction is rolled back. When False, a warning message is displayed, but the query, batch, or transaction continues as if no error occurred. For more information, see SET ARITHABORT (Transact-SQL). Concatenate Null Yields Null Specify the behavior when null values are concatenated. When the property value is True, string + NULL returns NULL. When False, the result is string. For more information, see SET CONCAT_NULL_YIELDS_NULL (Transact-SQL). Cross-database Ownership Chaining Enabled This read-only value indicates if cross-database ownership chaining has been enabled. When True, the database can be the source or target of a cross-database ownership chain. Use the ALTER DATABASE statement to set this property. Date Correlation Optimization Enabled When True, SQL Server maintains correlation statistics between any two tables in the database that are linked by a FOREIGN KEY constraint and have datetime columns. When False, correlation statistics are not maintained. Numeric Round-Abort Specify how the database handles rounding errors. Possible values are True and False. When True, an error is generated when loss of precision occurs in an expression. When False, losses of precision do not generate error messages, and the result is rounded to the precision of the column or variable storing the result. For more information, see SET NUMERIC_ROUNDABORT (Transact-SQL). Parameterization When SIMPLE, queries are parameterized based on the default behavior of the database. When FORCED, SQL Server parameterizes all queries in the database. Quoted Identifiers Enabled Specify whether SQL Server keywords can be used as identifiers (an object or variable name) if enclosed in quotation marks. Possible values are True and False. For more information, see SET QUOTED_IDENTIFIER (Transact-SQL). Recursive Triggers Enabled Specify whether triggers can be fired by other triggers. Possible values are True and False. When set to True, this enables recursive firing of triggers. When set to False, only direct recursion is prevented. To disable indirect recursion, set the nested triggers server option to 0 using sp_configure. For more information, see Create Nested Triggers. Trustworthy When displaying True, this read-only option indicates that SQL Server allows access to resources outside the database under an impersonation context established within the database. Impersonation contexts can be established within the database using the EXECUTE AS user statement or the EXECUTE AS clause on database modules. To have access, the owner of the database also needs to have the AUTHENTICATE SERVER permission at the server level. This property also allows the creation and execution of unsafe and external access assemblies within the database. In addition to setting this property to True, the owner of the database must have the EXTERNAL ACCESS ASSEMBLY or UNSAFE ASSEMBLY permission at the server level. By default, all user databases and all system databases (with the exception of MSDB) have this property set to False. The value cannot be changed for the model and tempdb databases. TRUSTWORTHY is set to False whenever a database is attached to the server. The recommended approach for accessing resources outside the database under an impersonation context is to use certificates and signatures as apposed to the Trustworthy option. To set this property, use the ALTER DATABASE statement. VarDecimal Storage Format Enabled This option is read-only starting with SQL Server 2008. When True, this database is enabled for the vardecimal storage format. Vardecimal storage format cannot be disabled while any tables in the database are using it. In SQL Server 2008).. For more information about Database State, see Database States.. For more information, see Transparent Data Encryption (TDE). See Also ALTER DATABASE (Transact-SQL) CREATE DATABASE (SQL Server Transact-SQL)
https://docs.microsoft.com/en-us/sql/relational-databases/databases/database-properties-options-page
2017-06-22T22:38:24
CC-MAIN-2017-26
1498128319912.4
[]
docs.microsoft.com
Stable File System In all current file systems, the clusters that make up a file are represented in some centralized structure (the File Table for FAT or a B+Tree for most other systems). In this process, all clusters are linked via a double linked list. This means that when a cluster is allocated to a given file, it has appended 2 64 bit entries. If there is more than one cluster in a file, then the 64 bit entries represent the previous and next cluster in that file. Publication Date05 March 2011 Tagsfile.
http://docs.defensivepublications.org/publications/stable-file-system
2017-06-22T22:25:31
CC-MAIN-2017-26
1498128319912.4
[array(['/images/spacer.gif', None], dtype=object)]
docs.defensivepublications.org
Learn how to Streamline your Payroll Processing Procedures for Maximum Efficiency 2017 Start Date : July 6, 2017 End Date : July 7, 2017 Time : 8:30 am to4:30 pm Phone : 800-447-9407 Description you should Opportunity to network with other payroll professionals and benefit from their experiences and questions You will walk away with a guide manual to help you apply the principles of payroll administration back in the – Organized by globalcompliancepanel NetZealous DBA as GlobalCompliancePanel , Online Event , NetZealous LLC-globalcompliancepanel, 161 Mission Falls Lane, Suite 216, Fremont,CA 94539, USA Tel: Event Manager Mobile: 800-447-9407 Website: Event Categories: Cardiology, Emergency Medicine, Geriatrics, Hematology, Pain Management, Physical Medicine, Plastic Surgery, and Rheumatology.
http://meetings4docs.com/event/learn-how-to-streamline-your-payroll-processing-procedures-for-maximum-efficiency-2017/
2017-06-22T22:30:05
CC-MAIN-2017-26
1498128319912.4
[]
meetings4docs.com
constructed up-stair. So chose the space I highlight below, and then follow along! Find the spot in the picture above, just below those down stairs. - Hit b - Hit C (Shift-c) -, dwarves dwarves’ time, which is a bit pointless. 5.2. Hotel Califortress!¶ dwarves that need hard work to keep happy! Fortunately we’ve dug down a few levels and we have a lot of nice rock down there. So lets go take some time to lay out some great bedrooms for our dwarves.. dwarves gems too, cool!) lets continue with- 5.3. Dwarves and their strange moods!¶ Oh dear! Something is going down in dwarf land! Endok Oltarisos, Tanner, withdraws from society… cap! .!
https://df-walkthrough.readthedocs.io/en/latest/chapters/chap05-industry.html
2018-11-13T03:08:18
CC-MAIN-2018-47
1542039741192.34
[array(['../_images/dftutorial79.png', '../_images/dftutorial79.png'], dtype=object) array(['../_images/dftutorial80.png', '../_images/dftutorial80.png'], dtype=object) array(['../_images/dftutorial81.png', '../_images/dftutorial81.png'], dtype=object) array(['../_images/dftutorial82.png', '../_images/dftutorial82.png'], dtype=object) array(['../_images/05-storage.png', '../_images/05-storage.png'], dtype=object) array(['../_images/05-meeting1.png', '../_images/05-meeting1.png'], dtype=object) array(['../_images/05-meeting2.png', '../_images/05-meeting2.png'], dtype=object) array(['../_images/05-meeting3.png', '../_images/05-meeting3.png'], dtype=object) array(['../_images/05-meeting4.png', '../_images/05-meeting4.png'], dtype=object) array(['../_images/05-all-stockpiles.png', '../_images/05-all-stockpiles.png'], dtype=object) array(['../_images/05-big-bedrooms.png', '../_images/05-big-bedrooms.png'], dtype=object) array(['../_images/dftutorial92.png', '../_images/dftutorial92.png'], dtype=object) array(['../_images/dftutorial95.png', '../_images/dftutorial95.png'], dtype=object) ]
df-walkthrough.readthedocs.io
ToolBarTray Structure - ToolBarTray - The horizontal tray where toolbars are placed. - Band - In the example above there are two bands: Band 0 contains two toolbars and Band 1 - one. - BandIndex - The position of a Toolbar inside the selected Band. Controls / RadToolBar /.
https://docs.telerik.com/devtools/silverlight/controls/radtoolbar/features/radtoolbartray-structure
2018-11-13T03:25:56
CC-MAIN-2018-47
1542039741192.34
[array(['images/toolbar5.png', None], dtype=object)]
docs.telerik.com
Preferences Dialog Box The Preferences dialog box lets you adjust preferences to suit your work style, allowing you to work more efficiently. NOTE To learn more about the individual preferences, refer to the Preferences guide —see About Harmony Preferences. NOTE Some preferences require you to exit and restart the application, or close a view and reopen it. Do one of the following: - Select Edit > Preferences (Windows/Linux) or Harmony Essentials > Preferences (macOS). - Press Ctrl + U (Windows/Linux) or ⌘ + U (macOS).
https://docs.toonboom.com/help/harmony-16/essentials/reference/dialog-box/preferences-dialog-box.html
2018-11-13T02:35:52
CC-MAIN-2018-47
1542039741192.34
[array(['../../Resources/Images/HAR/Preferences/HAR11/Sketch/HAR11_sketch_shortcuts.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
- ! Establishing a Secure Connection to the Server Farm The following example shows how Citrix Gateway deployed in the DMZ works with the Web Interface to provide a secure, single point-of-access to published resources available in a secure enterprise network. In this example, all of the following conditions exist: - User devices from the Internet connect to Citrix Gateway by using Citrix Receiver. - The Web Interface resides behind Citrix Gateway in the secure network. The user device makes the initial connection to Citrix Citrix Gateway. Note that the IP address of the server running the requested resource is never revealed to users. - The ICA file contains data instructing the web browser to start Citrix Receiver..
https://docs.citrix.com/en-us/netscaler-gateway/12-1/integrate-web-interface-apps/ng-wi-integrate-apps-secure-connection.html
2018-11-13T03:52:19
CC-MAIN-2018-47
1542039741192.34
[]
docs.citrix.com
delegate ( |Exception { /** * @param ContainerInterface $container * @param string $name * @param callable $callback * @return ErrorHandler */ public function __invoke(ContainerInterface $container, $name, callable $callback) { ($request, DelegateInterface $delegate) use ($renderer) { try { $response = $delegate->process('););
https://zend-expressive.readthedocs.io/en/latest/v2/features/error-handling/
2018-11-13T02:45:34
CC-MAIN-2018-47
1542039741192.34
[]
zend-expressive.readthedocs.io
This is the quickest and most effective way to decrease the load on your supervisor. If you only change one thing on your farm from the defaults, make this change. Default behavior Writing logs The default is to store the logs on the supervisor's local disk. Job log information is handled with remote log transmission that follows the following process: - The job logs are first stored locally on the Worker - Then transmitted from the Worker to the Supervisor - Then finally written locally on the Supervisor's filesystem. Reading logs When logs are stored on the supervisor, remote log transmission handling follows this protocol: - The client asks the supervisor for the logs for a particular job - A supervisor process: - reads the log data for that job into memory - converts it to a serialized object - sends that object across a network socket to the client However, the most efficient way is for both the Supervisor and the Worker to share the job log files directly on a common file server mounted by both the Supervisor host and the Worker hosts. In either case, the Supervisor will need to have access to the entire job log directory structure. Similarly the Client should read the job log files direct from disk as well instead of having the Supervisor transmit the files to it. On the Supervisor, job logs will be located in <supervisor_logpath>/job. On the Worker, job logs will be located in the <worker_logpath>/job. Both these directories should point to the same location on a shared filesytem. Steps to Set the Job Log Directory For the supervisor: Set the Supervisor job log directory to control where the supervisor writes the job logs by modifying the supervisor_logpath entry in the supervisor's qb.conf: supervisor_logpath = <shared directory> then restart the supervisor service for the change to take effect. For the workers: Set the Worker job log directory to control where the supervisor writes the job logs by modiying both the worker_logpath and worker_logmode entries in either the qbwrk.conf (recommended) or each worker's qb.conf: worker_logmode = mounted worker_logpath = <shared directory> If you make the changes in the qbwrk.conf on the supervisor, push the changes out with " qbadmin w --reconfigure". (See: Centralized Worker Configuration). If you edited each worker's qb.conf, you will need to restart the worker service for the change to take effect. For the clients: Set the Client job log directory. Modify the client_logpath entry in each client machine's qb.conf so the client machines will directly access the job log files from disk instead of going through the Supervisor: client_logpath = <shared directory> To test: - Submit a new job that is very simple, perhaps one that only runs the "set" command. You just want a job that starts, prints a few lines, and exits. - Verify the job log directory is being created in the expected location - If not, the supervisor is not set correctly. Verify and correct, restart the supervisor service, and re-submit to test. - Verify that the job log directory contains at minimun a .qja and .xja file. These are written by the supervisor. - Once the job is complete, verify that the job log directory contains at least a .out file (there will be 1 per job instance). There should probably also be a .err file. These are written by the worker. - If no .out or .err files exist, or the .out does not contain anything that looks like it came from the job itself, then the workers' logmode and logpath are not set correctly. Verify and correct, the re-submit to test.
http://docs.pipelinefx.com/display/QUBE/Writing+Job+Logs+to+a+Network+Filesystem?src=search
2021-04-11T00:51:07
CC-MAIN-2021-17
1618038060603.10
[]
docs.pipelinefx.com
. Find the Lookups manager - In the Splunk bar, click Settings. - In the Knowledge section, click Lookups. -. Upload the lookup table file To use a lookup table file, you must upload the file to your Splunk deployment. - In the Lookups manager, locate Lookup table files. - In the Actions column click Add new. - You use the Add new lookup table files view to upload CSV files that you want to use. -. Note:. Now that the lookup table file is uploaded, you need tell the Splunk software which applications can use this file. You can share the lookup table file with the Search app or with all of the apps. - create a lookup definition from the lookup table file.. Click Save. -.!
https://docs.splunk.com/Documentation/Splunk/6.4.7/SearchTutorial/Usefieldlookups
2021-04-11T02:20:48
CC-MAIN-2021-17
1618038060603.10
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
. Verifying File Locking¶ The Space Storage Volume must provide reliable file locking. This is not always the case with certain network mounted (NFS) volumes, which should be verified before usage. TDLogTest is a tool which simulates the concurrent access and locking patterns generated by multiple TeamDrive Clients. This tool can be used to test whether file locking support is compatible with the TeamDrive Hosting Service. Note The test cannot confirm with 100% certainty, whether an NFS volume is compatible with TeamDrive. However, failure of the test indicates that a volume is unfit to servie as /spacedata on a Host Server. The following is a step-by-step guide to running TDLogTest: - Download the package from: and copy it to the Host Server machine. - Create a test directory on the Space Volume, for example:mkdir /spacedata/vol01/TDLogTest - Enter this directory and extract the content of the tar archive, for example:tar zxvf ~/TDLogTest-1485.tar.gz 4. Edit TDLogTest.cfg, set the path in TDLOGS to the directory to be used for testing. - Initialize the test directory by running:./initTDLogTest - Start the test by running:./startTDLogTest The script spawns a (definable) number of reader and writer background processes which log their progress to STDOUT. Errors will be logged to TDLogTest.err by default. To stop the test, call ./stopTDLogTest. Keep the test running for a while. Try using different values for readers and writers as well, by stopping the test and passing different options to startTDLogTest. Also try creating multiple test directories and spawning more readers/writers using a different location. If there are multiple Host Server instances connected to the same NFS volume then the test must be performed from multiple instances simultaneously, after the initial test with one instance succeeded..3.8 (Aug 30 2016 11:57:45) loaded [notice] Logging (=error) to: /var/log/mod_yvva.log [notice] Apache/2.2.31 (Unix) mod_ssl/2.2.31 OpenSSL/1.0.1k-fips configured -- resuming normal operations [notice] mod_pspace 1.7.10 Loaded; Build Nov 17 2016 16:55:00;.
https://docs.teamdrive.net/HostServer/3.6.2/html/TeamDrive-Host-Server-Installation-en/Pre-Installation_Tasks.html
2021-04-11T00:55:59
CC-MAIN-2021-17
1618038060603.10
[]
docs.teamdrive.net
The AssetMarker-Plugin can be used to mark assets with bullets according to predefined criteria. To be configured in {home}/appserver/conf/custom.properties type: String, required: yes, default: - License key (delivered by brix IT Solutions) type: list of string, required: yes, default: noAssetType,invalidInformationFields,unlinked,unreleased,availability,locked,duplicate,conversionStatus This property defines which bullets are displayed on the assets. The order of the bullets in the property also matches the actual order of the bullets on the assets. To add a bullet, define a name and list it with the other bullets. For example: guiPlugin.bullets.asset=noAssetType,invalidInformationFields,unlinked,unreleased,availability,locked,duplicate,conversionStatus,myBullet. Here myBullet would be added. The standard CELUM bullets must also be listed in the property, otherwise they'll be missing afterwards. The name of the bullet is used for its remaining configuration, therefore {bulletName} is to be replaced by the individual name of each instance. type: string, required: yes, default: - This property specifies the path to the bullet icon. Each Bullet needs an icon with size 16 × 16 px, the actual content should be 12 × 12px, background transparent. type: string, required: yes, default: - This property sets the mouseover-title on the bullet. For multilanguage-support, add your messages to appserver/lang/customMessages_xy.properties and use the key here. Example: assetMarker.{bulletName}.title=assetMarker.myTitle appserver/lang/customMessages_de.propertiescontains assetMarker.myTitle=Mein Titel appserver/lang/customMessages_en.propertiescontains assetMarker.myTitle=My Title type: list of long, required: no, default: - If this property is set, only assets that have one of the configured Assettypes will have the bullet. type: list of long, required: no, default: - If this property is set, only assets that are linked to one of the configured nodes (or one of its children) will have the bullet. type: list of long, required: no, default: - If this property is set, only assets that are linked to one of the configured Nodetypes will have the bullet. type: string, required: no, default: - This property can be set for every Informationfield. If this property is configured for an Informationfield only assets that have the configured value in this fields will have the bullet. The configuration of this property depends on the Type of the Informationfield as shown below. Furthermore, the value can be set to 'isEmpty' and 'notEmpty' for all types. For better performance, we recommend that you add the property customfields.list.asset.required.field.idswith a list of all the information fields that you configure in the AssetMarker (e.g. customfields.list.asset.required.field.ids=108,118,138). If you don't explicitly tell CELUM to pre-load these information field values in the list response, the AssetMarker will have to reload the asset with those information field values, which can be slow. The value xy is a daycount. If the value is set to -1 it means infinite days from today. (e.g. next -1 means everyday in the future) x and y are numbers (resp. doubles). Ranges with '()' / '[]' / '(]' are also allowed. type: set of long, required: no, default: - This property takes a comma separated list of informationfield-Ids that should not be empty. type: boolean, required: no, default: false If this property is set to true, the bullet is only shown if the asset is not linked to any other node than the ones configured in the porperty assetMarker.{bulletName}.nodes type: list of string, required: no, default: - This property takes a comma separated whitelist of file-extensions that the assets should have. type: boolean, required: no, default: false Inverts the assetMarker.{bulletName}.nodes property if set to true. type: boolean, required: no, default: false Inverts the assetMarker.{bulletName}.asssetTypes property if set to true. type: boolean, required: no, default: false Inverts the assetMarker.{bulletName}.nodeTypes property if set to true. type: string, required: no, default: - This property allows to use a custom bean to define the bullet rules. List of available custom beans: For the assetMarkerTextIcon the following properties need to be set: type: string, required: no, default: - This property defines the category of the text-icon value. Possible keys: infofield, fileinfo, property type: string, required: no, default: - This property defines the text-icon value. Possible values depending on key: idof the information field (long) type: int, required: no, default: -, since 1.3.13 The maximum length of the text that is displayed. guiPlugin.bullets.asset=noAssetType,invalidInformationFields,unlinked,unreleased,availability,locked,duplicate,conversionStatus,groupBullet,progressBullet assetMarker.groupBullet.imagePath=../images/bullets/group.png assetMarker.groupBullet.title=maintab.grouptitle assetMarker.groupBullet.nodeTypeIds=106 assetMarker.progressBullet.imagePath=../images/bullets/asset_in_progress_16_blue.png assetMarker.progressBullet.title=infofield.approval.pending assetMarker.progressBullet.infofield.165=2 Released 2018-01-28 Initial Version Released 2018-06-12 Add isDateControlled marker Released 2018-08-09 Add the nodeExclusive property Released 2018-08-27 Add texticon marker Released 2018-11-08 Add number and double field logic Released 2020-02-18 - CELUM 6.4 compatible © brix IT Solutions
https://docs.brix.ch/celum_extensions/asset_marker
2021-04-11T01:37:27
CC-MAIN-2021-17
1618038060603.10
[]
docs.brix.ch
A newer version of this page is available. Switch to the current version. SqlGeometryItem Class This class is used to represent the SQL Geometry spatial data objects. Namespace: DevExpress.XtraMap Assembly: DevExpress.XtraMap.v18.2.dll Declaration public class SqlGeometryItem : IOwnedElement Public Class SqlGeometryItem Implements IOwnedElement Related API Members The following members accept/return SqlGeometryItem objects: See Also Feedback
https://docs.devexpress.com/WindowsForms/DevExpress.XtraMap.SqlGeometryItem?v=18.2
2021-04-11T00:09:31
CC-MAIN-2021-17
1618038060603.10
[]
docs.devexpress.com
If, for some reason, your computer doesn't meet the requirements, hardware (no GPU) or software (no CUDA or nvidia-docker), there is a quick way to try training & inference with Supervisely on Amazon EC2. If you have an account on EC2, deploying Supervisely agent is easy as one-two-three: Sign in into your account. We suppose you already have an account on AWS. If not, signup. Select EC2, open "Instances" section and click "Launch Instance" button. Search for "Deep Learning AMI". You will see a bunch of out-of-the-box images that have Docker and CUDA installed - exactly what we are looking for. We suggest to use "Deep Learning AMI (Amazon Linux 2)". Click "Select" button. On a next step select "GPU compute" filter and select "p3.*" instance type. We suggest using "p3.2xlarge". Different AMIs need different storage — i.e. "Deep Learning AMI (Ubuntu)" comes with Anaconda and multiple versions of CUDA so it's 100 Gb of already taken space. We suggest to configure at least 200 Gb volume size, because agent will download pretty large docker images. You can also attach additional EBS volume and create a symlink to ~/.supervisely-agent - this is where your model weights and images will be stored. Click "Review and Launch" to start your instance..
https://docs.supervise.ly/customization/agents/ami
2021-04-11T01:31:26
CC-MAIN-2021-17
1618038060603.10
[]
docs.supervise.ly
Nowadays is essential to have any security scheme in our applications. In WorkWithPlus we have a mechanism to easily add security to the application according to your needs. The property of Security node is to centralize security aspects of the application. It has four modes: None, Simple Security, Advanced Security and GAM. It is highly recommended to apply GAM Security because this is the standard security solution for GeneXus as it automatically manages Users, Roles, and the functionalities associated to objects, actions, links, and tabs. GAM also can be used for Mobile applications and to protect the web services of the application. This security will only check when user browses through every part of application if it has access to it (the pattern will invoke this for every object generated, but developer should add it to webpanels, reports, and other objects that are not generated by pattern). In order to do this, WorkWithPlus allow user to specify the procedure which will be invoked to check security and the webpanel which will be called when user is not allowed to be in the object he is browsing. For instance, if application users have some restrinctions depending on its role, we could set this on this procedure. Security node has the following properties: Specifies the procedure that will be invoked in order to check if application user is allowed to browse through that part of the application, or execute some action. By default this procedure is 'IsAuthorized', but the behaviour inside it should be developed by GeneXus developer. It could receive extra parameters (by adding it as a child from Security node and specifying its type) and could get values from context for example. The parameters that always receive are: &Pgmname and &IsAuthorized (in that order). Web panel that will be invoked when some application user is not allowed to be in the website he is browsing. By default this webpanel is 'NotAuthorized'. This security is explained in detail in the following link: Advanced Security This security is explained in detail in the following link: GAM Security
https://docs.workwithplus.com/servlet/com.wiki.wiki?1060,Security,
2021-04-11T02:02:05
CC-MAIN-2021-17
1618038060603.10
[]
docs.workwithplus.com
The Stripe source supports both Full Refresh and Incremental syncs. You can choose if this connector will copy only the new or updated data, or all rows in the tables and columns you set up for replication, every time a sync is run. This Stripe source is based on the Singer Stripe Tap. Several output streams are available from this source (customers, charges, invoices, subscriptions, etc.) For a comprehensive output schema look at the Singer tap schema files. The Stripe API uses the same JSONSchema types that Airbyte uses internally ( string, date-time, object, array, boolean, integer, and number), so no type conversions happen as part of this source. The Stripe connector should not run into Stripe API limitations under normal usage. Please create an issue if you see any rate limit issues that are not automatically retried successfully. Stripe Account Stripe API Secret Key Visit the Stripe API Keys page in the Stripe dashboard to access the secret key for your account. Secret keys for the live Stripe environment will be prefixed with sk_live_or rk_live. We recommend creating a restricted key specifically for Airbyte access. This will allow you to control which resources Airbyte should be able to access. For ease of use, we recommend using read permissions for all resources and configuring which resource to replicate in the Airbyte UI. If you would like to test Airbyte using test data on Stripe, sk_test_ and rk_test_ API keys are also supported.
https://docs.airbyte.io/integrations/sources/stripe
2021-04-11T01:04:20
CC-MAIN-2021-17
1618038060603.10
[]
docs.airbyte.io
Every annotation team needs to organize its work, but no one wants to stop working in order to track work. Supervisely Issues allow keeping the work of a huge labeling workforce all in one place. Data Labeling is about collaboration at scale: managers, domain experts, data scientists, inhouse labelers and dedicated external labeling teams. Hundreds of people are involved in. Also, this complicated process requires multi-stage reviewing and correction to guarantee quality. And it is hard to organize the entire process without specially designed tools. Supervisely Issues are built in cooperation with professional labeling teams and help to: create, organize and inspect issues on invalid images and objects. share, discuss, resolve them with your labeling team track progress and receive notifications in real-time collaborate right in annotation tool without switching interfaces back and forth Github Issues are the perfect example of efficient collaboration between thousands of developers. We at Supervisely got best practices from Github Issues, adopted them to labeling scenarios and integrated into annotation tools by keeping well-known user interfaces. Issues has its own section in every team. There are two types of issues: private and public ones. Private issues help to discuss project-related questions and are available only for team members. Public issues are designed to keep everybody in your organization on the same page, share ideas, hear all voices, consider all options — and establish consensus. A big part of managing issues is focusing on important tasks and keep plans up to date simultaneously all in one place. The search box at the top of the page gets you there faster. You can filter search results by type (public/private), author, assignee, project, job, and open/close state. Sorting is also available, for example by most commented or recently updated. A typical issue looks like this: Issue page Title and description describe what the issue is all about Links associate your issues with any Supervisely entities like projects, labeling jobs, neural networks and so on and help you categorize and filter your issues based on that Assignees are responsible for working on the issue Comments allow anyone with access to the Issues to provide feedback Markdown support — it is a lightweight and easy-to-use syntax for styling all your comments: add tables, images, coloring, lists, links and so on Once a lot of issues are collected, it is hard to find the ones you care about. Assignees and Links sections are used to categorize and filter issues. To assign users just click the corresponding gear in the sidebar on the right. To attach links, just copy and paste URLs into the comment text block, they will be automatically parsed. Integration into labeling interfaces is an essential part of the effective labeling process: place an issue to specific objects, images or some image regions see the entire context of a conversation organize real-time discussion of edge labeling cases get instant feedback and guides right from comments filter images with issues on the right panel change issue positions or move discussion popup so that their placement makes sense resolve issues from labeling interface, cause no one wants to stop working in order to track work You can close the entire issue or you can create a single issue and attach a lof of items (for example images or projects) to this issue. They will be in the “Items to resolve” page. Every item has its own discussion thread. Supervisely’s issue tracking is special because of our focus on collaboration and visualization tools during labeling. Every item can be discussed and resolved separately. For images and objects, a user sees initial labels (when the issue was created) and a final result. It is especially useful for tracking how labeling mistakes are fixed over time. Managers and reviewers can easily track changes by comparing the differences between initial and final labels. Issues evolve over time: titles change, items resolve, new items appear, discussions become long, issues get new assignees. Changing history in conversation gives better insight into these changes. Notifications provide important updates about the conversations and activities you’re involved in. Users don’t miss relevant information and stay informed of what’s going on. Notifications are available through email or right in Supervisely dashboard.
https://docs.supervise.ly/labeling/issues
2021-04-11T00:55:50
CC-MAIN-2021-17
1618038060603.10
[]
docs.supervise.ly
You should always supplement the context diagram with a table. This way you can reduce the amount of labels in the diagram and easily add explanations, rationale or cross-references. The comprehensive context diagram above requires such a tabular explanation. We only show excerpts of the corresponding table, but you should consider some characteristics of this typical “graphics/table” pair: - You should use short identifiers within the diagram in terms of abstractions or generic terms. We use “Services” or “Market Data” in the example, which are then described in more detail in the table - Reference within the table to more detailed explanations, e.g. if you are using terms from your domain language, you don’t need to explain them in the context, but refer to the relevant chapter (probably 8.1, domain modell). See the following example:
https://docs.arc42.org/tips/3-3/
2022-08-07T23:18:35
CC-MAIN-2022-33
1659882570730.59
[array(['/images/03-context-user-product-service.png', None], dtype=object)]
docs.arc42.org
xdmp:json-pointer( node as node(), path as xs:string ) as node()? Resolve a (relative) JSON pointer (see RFC 6901) in the context of a node and return the result, if any. xdmp:json-pointer( object-node{ "properties": object-node{ "count": object-node{ "type":"integer", "minimum":0 }, "items": object-node{ "type":"array", "items": object-node{"type":"string", "minLength":1 } } } },"/properties/count") => object-node{ "type":"integer", "minimum":0 } Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/9.0/xdmp:json-pointer
2022-08-07T22:04:30
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
AccessPlan.prototype.col( colName as String ) as columnIdentifier. col is a method of the AccessPlan class. //Inner.
https://docs.marklogic.com/AccessPlan.prototype.col
2022-08-07T22:21:13
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
Local Identifiability Tools StructuralIdentifiability.differentiate_solution— Function differentiate_solution(ode, params, ic, inputs, prec) Input: - the same as for power_series_solutions Output: - a tuple consisting of the power series solution and a dictionary of the form (u, v) => power series, where uis a state variable vis a state or parameter, and the power series is the partial derivative of the function uw.r.t. vevaluated at the solution StructuralIdentifiability.differentiate_output— Function differentiate_output(ode, params, ic, inputs, prec) Similar to differentiate_solution but computes partial derivatives of a prescribed outputs returns a dictionary of the form y_function => Dict(var => dy/dvar) where dy/dvar is the derivative of y_function with respect to var.
https://docs.sciml.ai/dev/modules/StructuralIdentifiability/utils/local_identifiability/
2022-08-07T21:28:12
CC-MAIN-2022-33
1659882570730.59
[]
docs.sciml.ai
Understanding Evaluation Mode Overview Arigato Automation offers a unique trial that allows merchants to explore all features of the Unlimited version of app without authorizing any charges to your Shopify account. All Evaluations start on an Unlimited plan. Unlimited plan features are identified with the star icon ( ★). If you choose to downgrade to a Lite plan, the app will warn you about what features will no longer be available after downgrading. You may downgrade to a Lite plan at any time during your evaluation (or after). Lite Plan Features - Create unlimited workflows - All action types - All conditions - Custom Actions - Custom Conditions - Token browser - Limited number of emails or SMS messages - *Schedules & Bulk Operations are NOT included in Lite plans Unlimited Plan Features Access to all Lite plan features, plus - Schedules - Bulk Operations - Higher limits on emails and SMS messages
https://arigato.docs.bonify.io/article/370-evaluation-mode
2022-08-07T22:00:58
CC-MAIN-2022-33
1659882570730.59
[]
arigato.docs.bonify.io
DeleteRoute Deletes the specified route from the specified route table. Request Parameters The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters. - DestinationCidrBlock The IPv4 CIDR range for the route. The value you specify must match the CIDR for the route exactly. Type: String Required: No - DestinationIpv6CidrBlock The IPv6 CIDR range for the route. The value you specify must match the CIDR for the route exactly. Type: String Required: No - DestinationPrefixListId The ID of the prefix list for the route. Type: String Required: No - deletes the route with destination IPv4 CIDR 172.16.1.0/24 from the specified route table. Sample Request &RouteTableId=rtb-1122334455667788a &DestinationCidrBlock=172.16.1.0/24 &AUTHPARAMS Sample Response <DeleteRouteResponse xmlns=""> <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId> <return>true</return> </DeleteRouteResponse> Example 2 This example deletes the route with destination IPv6 CIDR ::/0 from the specified route table. Sample Request &RouteTableId=rtb-1122334455667788a &DestinationIpv6CidrBlock=::/0 &AUTHPARAMS See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DeleteRoute.html
2022-08-07T22:39:26
CC-MAIN-2022-33
1659882570730.59
[]
docs.amazonaws.cn
Contributing to Ansible-maintained Collections The Ansible team welcomes community contributions to the collections maintained by Red Hat Ansible Engineering. This section describes how you can open issues and create PRs with the required testing before your PR can be merged. Ansible-maintained collections The following table shows: Ansible-maintained collection - Click the link to the collection on Galaxy, then click the repobutton in Galaxy to find the GitHub repository for this collection. Related community collection - Collection that holds community-created content (modules, roles, and so on) that may also be of interest to a user of the Ansible-maintained collection. You can, for example, add new modules to the community collection as a technical preview before the content is moved to the Ansible-maintained collection. Sponsor - Working group that manages the collections. You can join the meetings to discuss important proposed changes and enhancements to the collections. Test requirements - Testing required for any new or changed content for the Ansible-maintained collection. Developer details - Describes whether the Ansible-maintained collection accepts direct community issues and PRs for existing collection content, as well as more specific developer guidelines based on the collection type. Note * A ✓ under Open to PRs means the collection welcomes GitHub issues and PRs for any changes to existing collection content (plugins, roles, and so on). ** Integration tests are required and unit tests are welcomed but not required for the AWS collections. An exception to this is made in cases where integration tests are logistically not feasible due to external requirements. An example of this is AWS Direct Connect, as this service can not be functionally tested without the establishment of network peering connections. Unit tests are therefore required for modules that interact with AWS Direct Connect. Exceptions to amazon.aws must be approved by Red Hat, and exceptions to community.aws must be approved by the AWS community. *** ansible.netcommon contains all foundational components for enabling many network and security platform collections. It contains all connection and filter plugins required, and installs as a dependency when you install the platform collection. **** Unit tests for Windows PowerShell modules are an exception to testing, but unit tests are valid and required for the remainder of the collection, including Ansible-side plugins. Deciding where your contribution belongs We welcome contributions to Ansible-maintained collections. Because these collections are part of a downstream supported Red Hat product, the criteria for contribution, testing, and release may be higher than other community collections. The related community collections (such as community.general and community.network) have less-stringent requirements and are a great place for new functionality that may become part of the Ansible-maintained collection in a future release. The following scenarios use the arista.eos to help explain when to contribute to the Ansible-maintained collection, and when to propose your change or idea to the related community collection: You want to fix a problem in the arista.eosAnsible-maintained collection. Create the PR directly in the arista.eos collection GitHub repository. Apply all the merge requirements. You want to add a new Ansible module for Arista. Your options are one of the following: Propose a new module in the arista.eoscollection (requires approval from Arista and Red Hat). Propose a new collection in the aristanamespace (requires approval from Arista and Red Hat). Propose a new module in the community.networkcollection (requires network community approval). Place your new module in a collection in your own namespace (no approvals required). Most new content should go into either a related community collection or your own collection first so that is well established in the community before you can propose adding it to the arista namespace, where inclusion and maintenance criteria are much higher. Requirements to merge your PR Your PR must meet the following requirements before it can merge into an Ansible-maintained collection: The PR is in the intended scope of the collection. Communicate with the appropriate Ansible sponsor listed in the Ansible-maintained collection table for help. For network and security domains, the PR follows the resource module development principles. Passes sanity tests and tox. Passes unit, and integration tests, as listed in the Ansible-maintained collection table and described in Resource module integration tests. Follows Ansible guidelines. See Should you develop a module? and Developing collections. Addresses all review comments. Includes an appropriate changelog.
https://docs.ansible.com/ansible/5/community/contributing_maintained_collections.html
2022-08-07T21:31:58
CC-MAIN-2022-33
1659882570730.59
[]
docs.ansible.com
Software Download Directory frevvo Latest - This documentation is for frevvo v10.0. Not for you? Earlier documentation is available too. The frevvo product installation can be customized in many ways. For example, OEM partners can brand frevvo with their own company images and look. frevvo properties make it easy to implement many of the common customizations that customers and OEM partners will want to consider. frevvo Data API provides a programmatic protocol for viewing and managing resources such as tenants, users, applications, forms, schemas, etc. frevvo only supports/certifies frevvo running in the tomcat container. Refer to our Supported Platforms for the list of Application Servers supported/certified by frevvo. Customization properties can be changed in the frevvo <frevvo-home>/tomcat/conf/frevvo-config.properties file. This keeps all your modified parameters in one place and makes it easy to upgrade frevvo to newer releases. # SMTP Settings frevvo.mail.from.email= frevvo.mail.bounce.email= frevvo.mail.debug=false frevvo.actions.debug=true frevvo.rule.debug=true # frevvo schema name settings - needed for 6.0 upgrade only frevvo.users.schemaName=users # SQL Server schema name #frevvo.users.schemaName=users.dbo # HTTP Proxy Configuration for licensing #frevvo.proxy.host= #frevvo.proxy.port= #frevvo.proxy.username= #frevvo.proxy.password= #frevvo.proxy.ntlm=false # Insight settings insight.enabled=true insight.server-url= # File Connector settings frevvo.filesystem.connector.url= # Box Connector settings #frevvo.box.connector.client.id= #frevvo.box.connector.client.secret= # Sharepoint Connector settings frevvo.sharepoint.connector.url= On This Page: For the most common configuration customization tasks that every customer and OEM partner will want to consider see Installation Tasks. The frevvo Data API enables programmatic access to the all resources and data stored in the frevvo server. The API provides a simple protocol for viewing and managing resources such as forms, applications, schemas, etc. OEM partners as well as end user customers can use the API to extend the features and provide tighter integration with other applications. You will need to install a client library in order to use the API. VAR and OEM partners can brand frevvo with their own company images and look. This is accomplished via frevvo web application branding properties. In the frevvo-tomcat bundle, the frevvo-config.properties file is located in: <frevvo-home>\tomcat\conf. The configuration properties follow a simple property name=value syntax. Follow these steps to convert configuration parameters from previous releases to properties in the frevvo-config.properties file. Edit the frevvo-config.properties file. Consider the frevvo.menu.bar property. If added to the frevvo-config.properties file, with a value of false, prevents the menu bar from being rendered. Any of the menu items (Downloads,Templates, Help, Docs, Forum, Contact) can be removed from the menu bar via configuration properties. If all of the menu items are disabled, the menu bar should be removed. frevvo.menu.bar=false If you choose to brand at the tomcat container level, frevvo.logo.url=<url to your image> These are the brandable parameters and their defaults: Notice that several of the parameters use Url templates. For example, frevvo.help.url references a help file named designer.xsl that is located in the directory <frevvo-home>\tomcat\webapps\frevvo\WEB-INF\xsl\main\help. You may wish to bundle up a replacement help file and store your replacement file in that same directory. In that case you will keep the templates #{servletContext.contextPath}${servlet.path}/static that are part of the default path and append your own help file name. It can be an html file, myAppHelp.html. Or if your help is an external file you can replace the entire default value #{servletContext.contextPath}${servlet.path}/static/help/designer with a Url such has. Refer to Installation Tasks for the steps to replace the file after parameters in designer.xsl have been editied. Certain menu items are links to external Urls. Examples are Downloads, Forum, Docs etc. It is possible to completely remove any of these menu items by deleting the URL or setting the appropriate property. If no URL exists the menu item will be hidden. For example, if you want to hide the Downloads menu item: frevvo.downloads.menu=false For example if you wish to hide the Docs menu item, edit the frevvo.docs.url and delete the content: frevvo.docs.url= Note:No URL set will hide the Docs menu and top link The look & feel of the frevvo application is controlled via css. frevvo.css.url gives you the ability to add your own style sheet if you need to customize the look & feel of the page itself such as the background colors and sizes of the items on the pages. You can use a tool such as firebug to learn how the page is styled with css and thus how to override the default styling. The Powered by frevvo™ logo can be customized via the frevvo.poweredby.markup branding parameter. If this branding parameter is an empty string no logo will appear on any form. The logo can still be turned off on any given form via the Show Logo form property. The frevvo designer form and doc action buttons can be customized using the properties listed below. By default all wizards are visible in the form & doc action buttons page. To hide a wizard, remove it from the associated property below. <!-- Wizards --> frevvo.formaction.wizards=closePage,displayMessage,goToPage,goToPaypal,formPost,echoUsingGoogleDocument,createConfluencePage,mergeToConfluencePage - Which form action wizards are displayed for forms frevvo.erroraction.wizards=displayErrorMessage,goToErrorPage - Which error action wizards are displayed frevvo.docaction.wizards=doNothing,emailDocumentDefault,docPost,emailDocumentGoogle,saveToPaperVision,saveToGoogleDocuments - Which doc action wizards are displayed for forms frevvo.docuri.wizards=unsetDocUris,saveToGoogleSpreadsheets,manualDocUris - Which doc uri wizards are displayed frevvo.flows.formaction.wizards=closePage,displayMessage,goToPage,formPost,createConfluencePage,mergeToConfluencePage - Which form action wizards are displayed for workflows frevvo.flows.erroraction.wizards=displayErrorMessage,goToErrorPage - Which error action wizards are displayed for workflows frevvo.flows.docaction.wizards=doNothing,emailDocumentDefault,docPost,saveToPaperVision,saveToGoogleDocuments - Which doc action wizards are displayed for workflows frevvo.flows.docuri.wizards=unsetDocUris,manualDocUris - Which doc uri wizards are displayed</description> Hiding Data Sources on the designer screen can be accomplished by adding the ?_method=post&edit=true and &showDS=false parameter to the edit link of a form. Create the URL using the steps below: Then copy and paste it in another tab of the browser. You have to add it to the Edit link, you cannot click Edit first and then add this parameter to the link that appears in your browser. Here are examples of the URLs:DS=falsePalette=false You can use the frevvo.page.title context parameter in the frevvo.xml file to change the HTML prefixes of the titles of all frevvo pages. The Preview Page in the designer will display with value of the the frevvo.page. title parameter - <browser name>. For example, to change the HTML page prefix to a company name, follow the steps below: frevvo.page.title="Company Name" 4. Save the file and restart frevvo. You can modify the text of runtime messages and customize the labels in the Form/workflow designers by changing string name/value pairs in the default file found in the directory <frevvo home>\tomcat\webapps\frevvo. You can customize the Form designer and most labels (but not all) in the workflow designer. The English version of the modified strings appear on the UI once frevvo is restarted. For example, if you wanted to: The default file contains all of the runtime and designer strings that can be customized. Both requirements listed above can be accomplished by modifying this file. Follow these steps: To change the text of the "Access denied. Authentication required" message locate the Error messages section in the file. Notice that the strings on the left side of the '=' have spaces escaped with the '\' character. This is needed so do not remove that. The escape character is not needed on the right side of the '='. Enter the text that you want on the right side of the '=' # Error messages Access\ Denied.\ Authentication\ required.= Please Sign into frevvo To change the labels in the Forms designer, locate the Form Designer toolbox section of the file. Enter the labels that you want to display to the right of the '=' # Form Designer Toolbox Palette=Palette Controls Custom=My Custom Controls Properties=Control Properties Data\ Sources=Schemas Drag\ and\ drop\ controls\ from\ the\ form\ into\ the\ header\ above.\ You\ can\ then\ re-use\ them\ in\ other\ forms.= Drop\ Submit\ buttons\ from\ the\ palette\ to\ add\ to\ the\ form.= Drop\ controls\ from\ the\ palette\ to\ add\ to\ the\ form.= Drag\ and\ drop\ controls\ from\ the\ palette\ into\ the\ form.= Drag\ and\ drop\ controls\ from\ the\ palette\ or\ from\ the\ form.= Replace the original file with the updated version. Restart your frevvo server. Customized Labels in the Forms Designer The parameters, frevvo.db.check.encoding and frevvo.db.encoding.error, have been added to the frevvo-config.properties file included in the <frevvo-home>\tomcat\webapps\frevvo. These parameters can be used to specify how to handle a check of database encoding and how to display the error if it is not UTF8. They are particularly useful for OEMs who may want to skip the fatal error by setting the frevvo.db.check.encoding parameter to false. You can set up these parameters to accomplish the following: By default frevvo does the check and will fail if not set correctly. 1. Perform the utf8 check with a fatal error. 2. Disable the UTF db encoding check completely. 3. Perform the check but not fail and simply log the warning in the <frevvo-home>\.tomcat\logs\frevvo.log file. Here is how it works: Follow these steps to set the parameters: <!-- Database Schema checking on startup --> <context-param> <param-name>frevvo.db.check.encoding</param-name> <param-value>true</param-value> <description>Check the database encoding on startup</description> </context-param> <context-param> <param-name>frevvo.db.encoding.error</param-name> <param-value>true</param-value> <description>If encoding is checked and is wrong, then it is a fatal error, otherwise only a warning is logged</description> </context-param> These parameters in the frevvo.xml file would be: <Parameter name "frevvo.db.check.encoding" value="true" override="false"/> <Parameter name "frevvo.db.encoding.error" value="true" override="false"/> OEM partners can use the context parameter, frevvo.oem.branding.css to name a css class that will be placed onto the body of the frevvo UI pages (form designer, etc.). Follow the steps below to do this: It is recommended that you modify the default oem-branding.css file to make the desired changes to the frevvo application. Any aspect of this file can be changed. frevvo builds upon several 3rd party products. The <frevvo-home>\tomcat\webapps\frevvo file contains a WEB-IN\licenses with all the 3rd party licenses.
https://docs.frevvo.com/d/display/frevvo100/Installation+Customizations
2022-08-07T21:24:34
CC-MAIN-2022-33
1659882570730.59
[]
docs.frevvo.com
Teradici Cloud Access Software (Graphics) for Windows on AWS at
https://docs.teradici.com/taxonomy/term/115
2022-08-07T21:17:41
CC-MAIN-2022-33
1659882570730.59
[]
docs.teradici.com
UInt8 From Xojo Documentation Used to store unsigned 8-bit integer values. The default value is 0. Generally you will use the Integer data type (equivalent to Int32 on 32-bit apps or Int64 on 64-bit apps) or UInteger (equivalent to UInt32 on 32-bit apps or UInt64 on 64-bit apps). This size-specific integer data type is available for use with external OS APIs. Notes UInt8 values can range from 0 to 255 and use 1 byte. Comparing a UInt88. In this example the variable Distance is a UInt8: See Also UInteger, UInt16, UInt32, and UInt64 data types.
http://docs.xojo.com/index.php?title=UInt8&printable=yes
2022-08-07T21:33:47
CC-MAIN-2022-33
1659882570730.59
[]
docs.xojo.com
Metrics for MemoryDB The AWS/memorydb namespace includes the following Redis metrics. With the exception of ReplicationLag and EngineCPUUtilization, these metrics are derived from the Redis info command. Each metric is calculated at the node level. For complete documentation of the Redis info command, see See Also The following are aggregations of certain kinds of commands, derived from info commandstats. The commandstats section provides statistics based on the command type, including the number of calls. For a full list of available commands, see redis commands
https://docs.aws.amazon.com/memorydb/latest/devguide/metrics.memorydb.html
2022-08-07T22:15:31
CC-MAIN-2022-33
1659882570730.59
[]
docs.aws.amazon.com
Definition at line 34 of file env_vars.h. Get an environment variable as a specific type, if set correctly. Get a string environment variable, if it is set. Definition at line 120 of file env_vars.cpp. Get a double from an environment variable, if set. Get the list of pre-defined environment variables. Definition at line 60 of file env_vars.cpp. References predefinedEnvVars. Referenced by DIALOG_CONFIGURE_PATHS::OnHelp(). Determine if an environment variable is "predefined", i.e. if the name of the variable is special to KiCad, and isn't just a user-specified substitution name. Definition at line 48 of file env_vars.cpp. References predefinedEnvVars. Referenced by DIALOG_CONFIGURE_PATHS::AppendEnvVar(), and DIALOG_CONFIGURE_PATHS::OnRemoveEnvVar(). Look up long-form help text for a given environment variable. This is intended for use in more verbose help resources (as opposed to tooltip text) Definition at line 108 of file env_vars.cpp. References initialiseEnvVarHelp(). Referenced by DIALOG_CONFIGURE_PATHS::OnHelp().
https://docs.kicad.org/doxygen/namespaceENV__VAR.html
2022-08-07T21:33:31
CC-MAIN-2022-33
1659882570730.59
[]
docs.kicad.org
geo.destination( p as cts.point, bearing as Number, distance as Number, [options as String[]] ) as cts.point Returns the point at the given distance (in units) along the given bearing (in radians) from the starting point..destination(sf, 1.22100904274442, geo.distance(sf, ny)); => cts:point("40.009335,-72.997467") Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/geo.destination
2022-08-07T22:42:32
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
Guru Meditation Reports¶ Zaqar contains a mechanism whereby developers and system administrators can generate a report about the state of a running Zaqar executable. This report is called a Guru Meditation Report (GMR for short). Generating a GMR¶ For wsgi and websocket mode, a GMR can be generated by sending the USR2 signal to any Zaqar process with support (see below). The GMR will then be outputted standard error for that particular process. For example, suppose that zaqar-server has process id 8675, and was run with 2>/var/log/zaqar/zaqar-server-err.log. Then, kill -USR2 8675 will trigger the Guru Meditation report to be printed to /var/log/zaqar/zaqar-server-err.log. For uwsgi mode, user should add a configuration in Zaqar'] For example, you can specify “file_event_handler=/tmp/guru_report” and “file_event_handler_interval=1” in Zaqar’s conf file. A GMR can be generated by “touch”ing the file which was specified in file_event_handler. The GMR will then output to standard error for that particular process. For example, suppose that zaqar-server was run with 2>/var/log/zaqar/zaqar-server-err.log, and the file path is /tmp/guru_report. Then, touch /tmp/guru_report will trigger the Guru Meditation report to be printed to /var/log/zaqar/zaqar-server Extending the GMR¶ As mentioned above, additional sections can be added to the GMR for a particular executable. For more information, see the inline documentation about oslo.reports: oslo.reports
https://docs.openstack.org/zaqar/latest/admin/gmr.html
2022-08-07T22:33:03
CC-MAIN-2022-33
1659882570730.59
[]
docs.openstack.org
public interface Buffer<Context> extends Operation<Context> Aggregatorby the fact that it operates on unique groups of values. It differs by the fact that an Iteratoris provided and it is the responsibility of the operate(cascading.flow.FlowProcess, BufferCall)method to iterate overall all the input arguments returned by this Iterator, if any. For the case where a Buffer follows a CoGroup, the method operate(cascading.flow.FlowProcess, BufferCall)will be called for every unique group whether or not there are values available to iterate over. This may be counter-intuitive for the case of an 'inner join' where the left or right stream may have a null grouping key value. Regardless, the current grouping value can be retrieved through BufferCall.getGroup(). Buffer is very useful when header or footer values need to be inserted into a grouping, or if values need to be inserted into the middle of the group values. For example, consider a stream of timestamps. A Buffer could be used to add missing entries, or to calculate running or moving averages over a smaller "window" within the grouping. By default, if a result is emitted from the Buffer before the argumentsIterator is started or after it is completed ( argumentsIterator.hasNext() == false), non-grouping values are forced to null (to allow for header and footer tuple results). By setting BufferCall.setRetainValues(boolean)to truein the Operation.prepare(cascading.flow.FlowProcess, OperationCall)method, the last seen Tuple values will not be nulled after completion and will be treated as the current incoming Tuple when merged with the Buffer result Tuple via the Every outgoing selector. There may be only one Buffer after a GroupByor CoGroup. And there may not be any additional Everypipes before or after the buffers Every pipe instance. A PlannerExceptionwill be thrown if these rules are violated. Buffer implementations should be re-entrant. There is no guarantee a Buffer instance will be executed in a unique vm, or by a single thread. Also, note the Iterator will return the same TupleEntryinstance, but with new values in its child Tuple. As of Cascading 2.5, if the previous CoGroup uses a BufferJoinas the Joiner, a Buffer may be used to implement differing Joiner strategies. Instead of calling BufferCall.getArgumentsIterator()(which will return null), BufferCall.getJoinerClosure()will return an JoinerClosureinstance with direct access to each CoGrouped Iterator. ANY cleanup, flush, getFieldDeclaration, getNumArgs, isSafe, prepare void operate(FlowProcess flowProcess, BufferCall<Context> bufferCall) BufferCallpasses in an Iteratorthat returns an argument TupleEntryfor each value in the grouping defined by the argument selector on the parent Every pipe instance. TupleEntry entry, or entry.getTuple() should not be stored directly in a collection or modified. A copy of the tuple should be made via the new Tuple( entry.getTuple() )copy constructor. This method is called for every unique group, whether or not there are values in the arguments Iterator. flowProcess- of type FlowProcess bufferCall- of type BufferCall
http://docs.concurrentinc.com/cascading/3.3/javadoc/cascading-core/cascading/operation/Buffer.html
2022-08-07T22:25:31
CC-MAIN-2022-33
1659882570730.59
[]
docs.concurrentinc.com
Oracle Package Wizard is designed for creating wrapper classes for PL/SQL Packages. It greatly simplifies working with types and stored procedures containing in PL/SQL Packages. Oracle Package Wizard supports: To create a wrapper class, perform the following steps: Note that items with unsupported parameter types cannot be selected. They are grayed out. Use Numbers - when this option is checked, Wizard maps Oracle numbers with the precision larger than 15 to ftNumber. Otherwise, they are mapped to ftFloat. Use Integers - when this option is enabled, Wizard maps Oracle numbers with the precision less than 10 to ftInteger. Otherwise, they are mapped to ftFloat or ftNumber. Use TimeStamps - when this option is enabled, Wizard maps Oracle timestamps to ftTimeStamp, ftTimeStampTZ, or ftTimeStampLTZ. Otherwise, timestamps are mapped to ftDateTime. Use DataSets - when this option is enabled, Wizard uses TOraDataSet parameters to return Orcale cursors. Otherwise TOraCursor parameters are used. Use Unicode - when this option is enabled, Wizard creates fields of the ftWideString data type. Otherwise, ftString is used. Use variants as parameters - when this option is enabled, variants are used for all simple parameter types. Generate overloaded methods - when this option is enabled, overloaded methods are created. Otherwise, overloaded subprograms are mapped to the methods with different suffixes (1, 2, 3 and so on). Unchangedcase, CapitalizedCase, lowercase, UPPERCASE - these alternative options define character case in identifier names. Remove underscores - when this option is enabled, Wizard removes underscores from generated identifiers. Prefix objects with T - when this option is enabled, generated class names are prefixed with 'T'. Prefix parameters with A - when this option is enabled, method parameters are prefixed with 'A'. Generate code for all versions of Delphi - when this option is enabled, generated code is compatible with the following Delphi versions: Delphi 6, Delphi 7, Borland Developer Studio 2006, CodeGear Delphi 2007 for Win32. Otherwise, generated code will work surely only in the current version of Delphi. Generated code for - select Win32, CLR or Both to determine environments that generated code will be compatible with. Press the Generate button to generate classes for selected packages.
https://docs.devart.com/odac/package_wizard.htm
2022-08-07T21:36:44
CC-MAIN-2022-33
1659882570730.59
[]
docs.devart.com
Date: Sun, 7 Aug 2022 23:04:16 +0000 (UTC) Message-ID: <547328523.6648.1659913456440@93e1396c9615> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_6647_743735740.1659913456440" ------=_Part_6647_743735740.1659913456440 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Contents:=20 trueif the first argument is less tha= n but not equal to the second argument. Equivalent to the <= operator. Since the function returns a Boolean value, it can be used as a fun= ction or a conditional. NOTE: Within an expression, you might choose to use the= corresponding operator, instead of this function. For more information, se= e Comparison Operators.<= /p> keep row: LESSTHAN(Errors, 10) Output: Keeps all rows in which the value in the&n= bsp; Errors column is less than 10. derive type:single value:LESS= THAN(value1, value2) For more information on syntax standards, see Language Documentation Syntax Note= s. Names of the column, expressions, or literals to compare. Usage Notes:=20 Tip: For additional examples, see Common Tasks.=20 This simple example demonstrate av= ailable fun= ction: derive type:single value:LESSTHAN(c= olA, colB) as:'lt'=20 derive type:single value:LESSTHANEQ= UAL(colA, colB) as:'lte'=20 derive type:single value:EQUAL(colA= , colB) as:'eq'=20 derive type:single value:NOTEQUAL(c= olA, colB) as:'neq'=20 derive type:single value:GREATERTHA= N(colA, colB) as:'gt'=20 derive type:single value:GREATERTHA= NEQUAL(colA, colB) as:'gte' Results: In the town of Circleville, citize= ns are allowed to maintain a single crop circle in their backyard, as long = as it confirms to the town regulations. Below is some data on the size of c= rop circles in town, with a separate entry for each home. Limits are displa= yed in the adjacent columns, with the inclusive columns= indicating whether the minimum or maximum values are inclusive. Tip: As part of this exercise, you can see how to you c= an extend your recipe to perform some simple financial analysis of the data= . Source: Transform: After the data is loaded into the Transformer page, you can begin compar= ing column values: derive type:single value: LESSTHANE= QUAL(Radius_ft,minRadius_ft) as:'tooSmall'While accurate, the a= bove transform does not account for the minInclusivevalu= e, which may be changed as part of your steps. Instead, you can delete= the previous transform and use the following, which factors in the other c= olumn:=20 derive type:single value: IF(minInc= lusive =3D=3D 'Y',LESSTHANEQUAL(Radius_ft,minRadius_ft),LESSTHA= N(Radius_ft,minRadius_ft)) as:'tooSmall'In this c= ase, the IFfunction tests whether the minimum value is i= nclusive (values of 10are allowed). If so, the LES= STHANEQUALfunction is applied. Otherwise, the LESSTHAN= code> function is applied. For the maximum limit, the following step applie= s:=20 derive type:single value: IF(= maxInclusive =3D=3D 'Y',GREATERTHANEQUAL(Radius_ft,maxRadius_f= t),GREATERTHAN(Radius_ft,maxRadius_ft)) as:'tooBig'Now, = you can do some analysis of this data. First, you can insert a column conta= ining the amount of the fine per foot above the maximum or below the minimu= m. Before the first derivecommand, insert the following,= which is the fine ( $15.00) for each foot above or below the l= imits:=20 derive type:single value: 15 as:'fi= neDollarsPerFt'At the end of the recipe, add the following new = line, which calculates the fine for crop circles that are too small:=20 derive type:single value: IF(tooSma= ll =3D=3D 'true', (minRadius_ft - Radius_ft) * fineDollarsPerF= t, 0.0) as: 'fine_Dollars'The above captures the too-small viol= ations. To also capture the too-big violations, change the above to the fol= lowing:=20 derive type:single value: IF(= tooSmall =3D=3D 'true', (minRadius_ft - Radius_ft) * fineDollarsPerFt, if(t= ooBig =3D=3D 'true', (Radius_ft - maxRadius_ft) * fineDollarsPerFt, '0.0')) as: 'fine_Dollars'In place of the orig= inal "false" expression ( 0.0), the above adds the test for the= too-big values, so that all fines are included in a single column. You can= reformat the fine_Dollarscolumn to be in dollar format:=20 set col: fine_Dollars value: NUMFORMAT(fine_Doll= ars, '$###.00') Results: After you drop the columns used in the calculation and move the remainin= g ones, you should end up with a dataset similar to the following: Now that you have created all of the computations for generating t= hese values, you can change values for minRadius_ft , maxRadius_ft, and fineDollarsPer= Ftto analyze the resulting fine revenue. Before or after the = transform where you set the value for fineDollarsPerFt, y= ou can insert something like the following: set col: minRadius_ft value:'12.5' After the step is added, select the last line in the recipe. Then, you = can see how the values in the fineDollarscolumn have bee= n updated. =20=20
https://docs.trifacta.com/exportword?pageId=110758384
2022-08-07T23:04:16
CC-MAIN-2022-33
1659882570730.59
[]
docs.trifacta.com
Read the Docs Blog - Posts tagged security 2018-08-13T00:00:00Z ABlog>.</p> <p.</p> <p.<>
https://blog.readthedocs.com/archive/tag/security/atom.xml
2022-08-07T21:47:03
CC-MAIN-2022-33
1659882570730.59
[]
blog.readthedocs.com
checkQC.handlers.unidentified_index_handler module¶ - class checkQC.handlers.unidentified_index_handler. UnidentifiedIndexHandler(*args, **kwargs)[source]¶ Bases: checkQC.handlers.qc_handler.QCHandler The UnidentifiedIndexHandler will try to identify if an index is represented at to high a level in unidentified reads, and if that is the case try to pinpoint why that is. It will not output errors, but all information will be displayed as warnings, due to the difficulty of deciding what is an error or not in this context. For most cases the % of unidentified reads will be what is used to issue the error, and then the warnings from this handler can help in identifying the possible underlying cause. There are a number of different checks (or rules) in place, which will be checked if and index occurs more then the significance_threshold. The samplesheet will be checked to see if the index found matches and of the following rules: - Check if the dual indexes have been swapped - Check if the index has been reversed - Check if the index is the reverse complement - Check if the index is the complementary index - Check if the index is present in another lane It will ignore any indexes which have N’s in them. These are assumed to be read errors. always_warn_rule(tag, lane, percent_on_lane, **kwargs)[source]¶ We always want to warn about an index that is significantly represented. This rule will make sure that we do so, and all other rules will contribute extra information if there is any. :param tag: :param lane: :param percent_on_lane: :return:) evaluate_index_rules(tag, lane, samplesheet_searcher, percent_on_lane)[source]¶ Evaluates a list of ‘rules’ and yields all warnings found by these rules. :param tag: :param lane: :param samplesheet_searcher: :param percent_on_lane: :return: generator of QCErrorFatal number_of_reads_per_lane()[source]¶ Transform conversion results into dict of lane -> total clusters pass filer :return: dict {<lane>: <total clusters pass filer>} validate_configuration()[source]¶ This overrides the normal configuration which looks for warning/error. :return: None :raises: ConfigurationError if the configuration for the handler was not valid.
https://checkqc.readthedocs.io/en/latest/checkQC.handlers.unidentified_index_handler.html
2022-08-07T21:13:20
CC-MAIN-2022-33
1659882570730.59
[]
checkqc.readthedocs.io
Active Directory with Connector Appliance (preview) You can use Connector Appliance to connect a resource location to forests which do not contain Citrix Virtual Apps and Desktops resources. For example, in the case of Citrix Secure Private Access customers or Citrix Virtual Apps and Desktops customers with some forests only used for user authentication. In this preview of multi-domain Active Directory with Connector Appliance, the following restrictions apply: - Connector Appliance cannot be used in place of Cloud Connectors in forests that contain VDAs. RequirementsRequirements Active Directory requirements - Joined to an Active Directory domain that contains the resources and users that you use to create offerings for your users. For more information, see Deployment scenarios for Connector Appliances in Active Directory in this article. - Each Active Directory forest you plan to use with Citrix Cloud must always be reachable by two Connector Appliances. - The Connector Appliance must be able to reach domain controllers in both the forest root domain and in the domains that you intend to use with Citrix Cloud. For more information, see the following Microsoft support articles: - How to configure domains and trusts - “Systems services ports” section in Service overview and network port requirements for Windows - Use universal security groups instead of global security groups. This configuration ensures that user group membership can be obtained from any domain controller in the forest. Ensure LDAPS is supported in all domain controllers. The Connector Appliance uses the encrypted LDAPS protocol to make Active Directory connections. To enable this protocol, ensure that every domain controller in the forest has valid certificates installed. For more information, see Enable LDAP over SSL. If the certificates are not installed, joining the domain fails with the message “Authentication error. Check your credentials and try again”. Network requirements - Connected to a network that can contact the resources you use in your resource location. - Connected to the Internet. For more information, see System and Connectivity Requirements. Supported Active Directory functional levelsSupported Active Directory functional levels Connector Appliance has been tested and is supported with the following forest and domain functional levels in Active Directory. Other combinations of domain controller, forest functional level, and domain functional level have not been tested with the Connector Appliance. However, these combinations are expected to work and are also supported in this preview. Connect an Active Directory domain to Citrix Cloud by using Connector Appliance (preview)Connect an Active Directory domain to Citrix Cloud by using Connector Appliance (preview) To configure Active Directory to connect to Citrix Cloud through the Connector Appliance, complete the following steps. Install a Connector Appliance in your resource location. You can follow the information in the Connector Appliance product documentation. Connect to the Connector Appliance administration webpage in your browser by using the IP address provided in the Connector Appliance console. In the Active Directory domains section, click + Add Active Directory domain. Enter the domain name in the Domain Name field. Click Add. The Connector Appliance checks the domain. If the check is successful, the Join Active Directory dialog opens. Enter the user name and password of an Active Directory user that has join permission for this domain. The Connector Appliance suggests a machine name. You can choose to override the suggested name and provide your own machine name that is up to 15 characters in length. This machine name is created in the Active Directory domain when the Connector Appliance joins it. Click Join. The domain is now listed in the Active Directory domains section of the Connector Appliance UI. To add more Active Directory domains, select to + Add Active Directory domain and repeat the preceding steps. If you have not already registered your Connector Appliance, continue with the steps as described in Register your Connector Appliance with Citrix Cloud. If you receive an error when joining the domain, verify that your environment fulfils the Active Directory requirements and the network requirements. What’s nextWhat’s next You can add more domains to this Connector Appliance. Note: For this preview, the Connector Appliance is tested with up to 10 forests. For resilience, add each domain to more than one Connector Appliance in each resource location. Viewing your Active Directory configurationViewing your Active Directory configuration You can view the configuration of the Active Directory domains and Connector Appliances in your resource locations in the following places: In Citrix Cloud: - In the menu, go to the Identity and Access Management page. Go to the Domains tab. Your Active Directory domains are listed with the resource locations they are part of. In the Connector Appliance webpage: - Connect to the Connector Appliance webpage by using the IP address provided in the Connector Appliance console. - Log in with the password you created when you first registered. - In the Active Directory domains section of the page, you can see the list of Active Directory domains this Connector Appliance is joined to. Removing an Active Directory domain from a Connector ApplianceRemoving an Active Directory domain from a Connector Appliance To leave an Active Directory domain, complete the following steps: - Connect to the Connector Appliance webpage by using the IP address provided in the Connector Appliance console. - Log in with the password you created when you first registered. - In the Active Directory domains section of the page, find the domain you want to leave in the list of joined Active Directory domains. - Note the name of the machine account created by your Connector Appliance. - Click the delete icon (trashcan) next to the domain. A confirmation dialog appears. - Click Continue to confirm the action. - Go to your Active Directory controller. - Delete the machine account created by your Connector Appliance from the controller. Deployment scenarios for using Connector Appliance with Active DirectoryDeployment scenarios for using Connector Appliance with Active Directory You can use both Cloud Connector and Connector Appliance to connect to Active Directory controllers. The type of connector to use depends on your deployment. For more information about using Cloud Connectors with Active Directory, see Deployment scenarios for Cloud Connectors in Active Directory Use the Connector Appliance to connect your resource location to the Active Directory forest in the following situations: - You are setting up Secure Private Access. For more information, see Secure Private Access with Connector Appliance. - You have one or more forests that are only used for user authentication - You want to reduce the number of connectors required to support multiple forests - You need a Connector Appliance for other use cases Only users in one or more forests with a single set of Connector Appliances for all forests This scenario applies to Workspace Standard customers or customers using Connector Appliance for Secure Private Access. In this scenario, there are several forests that contain only user objects ( forest1.local, forest2.local). These forests do not contain resources. One set of Connector Appliances is deployed within a resource location and joined to the domains for each of these forests. - Trust relationship: None - Domains listed in Identity and Access Management: forest1.local, forest2.local - User logons to Citrix Workspace: Supported for all users - User logons to an on-premises StoreFront: Supported for all users Users and resources in separate forests (with trust) with a single set of Connector Appliances for all forests This scenario applies to Citrix Virtual Apps and Desktops customers with multiple forests. In this scenario, some forests ( resourceforest1.local, resourceforest2.local) contain your resources (for example, VDAs) and some forests ( userforest1.local, userforest2.local) contain only your users. A trust exists between these forests that allows users to log on to resources. One set of Cloud Connectors is deployed within the resourceforest1.local forest. A separate set of Cloud Connectors is deployed within the resourceforest2.local forest. One set of Connector Appliances is deployed within the userforest1.local forest and the same set is deployed within the userforest2.local forest. - Trust relationship: Bi-directional forest trust, or uni-directional trust from the resource forests to the user forests - Domains listed in Identity and Access Management: resourceforest1.local, resourceforest2.local, userforest1.local, userforest2.local - User logons to Citrix Workspace: Supported for all users - User logons to an on-premises StoreFront: Supported for all users In this article - Requirements - Supported Active Directory functional levels - Connect an Active Directory domain to Citrix Cloud by using Connector Appliance (preview) - What’s next - Viewing your Active Directory configuration - Removing an Active Directory domain from a Connector Appliance - Deployment scenarios for using Connector Appliance with Active Directory
https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-resource-locations/connector-appliance/active-directory.html?lang-switch=true
2022-08-07T22:30:11
CC-MAIN-2022-33
1659882570730.59
[]
docs.citrix.com
List of Hazelcast Metrics The table below lists the metrics with their explanations in alphabetical order. The metrics we listed below are collected per member and these metrics are specific to the local member from which you collect them. For example, the distributed data structure metrics reflect the local statistics of that data structure for the portion held in that member. It should be noted that some metrics can store cluster-wide agreed value, that is, they may show the values obtained by communicating with other members in the cluster. This type of metrics reflect the member’s local view of the cluster (consider split brain scenarios). The clusterStartTime is an example of this type of metrics, and its value in the local member is obtained by communicating with the master. Jet Engine.
https://docs.hazelcast.com/hazelcast/5.1/list-of-metrics
2022-08-07T21:51:29
CC-MAIN-2022-33
1659882570730.59
[]
docs.hazelcast.com
Enum StepCountingHillClimbingType - java.lang.Object - java.lang.Enum<StepCountingHillClimbingType> - org.optaplanner.core.config.localsearch.decider.acceptor.stepcountinghillclimbing.StepCountingHillClimbingType - All Implemented Interfaces: Serializable, Comparable<StepCountingHillClimbingType> public enum StepCountingHillClimbingType extends Enum<StepCountingHillClimbingType>Determines what increment the counter of Step Counting Hill Climbing. Enum Constant Detail SELECTED_MOVE public static final StepCountingHillClimbingType SELECTED_MOVEEvery selected move is counted. ACCEPTED_MOVE public static final StepCountingHillClimbingType ACCEPTED_MOVEEvery accepted move is counted. Note: If LocalSearchForagerConfig.getAcceptedCountLimit()= 1, then this behaves exactly the same as {link #STEP}. STEP public static final StepCountingHillClimbingType STEPEvery step is counted. Every step was always an accepted move. This is the default. EQUAL_OR_IMPROVING_STEP public static final StepCountingHillClimbingType EQUAL_OR_IMPROVING_STEP IMPROVING_STEP public static final StepCountingHillClimbingType IMPROVING_STEP Method Detail values public static StepCountingHillClimbingType[] values()Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows: for (StepCountingHillClimbingType c : StepCountingHillClimbingType.values()) System.out.println(c); - Returns: - an array containing the constants of this enum type, in the order they are declared valueOf public static StepCountingHillClimbing
https://docs.optaplanner.org/8.20.0.Final/optaplanner-javadoc/org/optaplanner/core/config/localsearch/decider/acceptor/stepcountinghillclimbing/StepCountingHillClimbingType.html
2022-08-07T21:49:20
CC-MAIN-2022-33
1659882570730.59
[]
docs.optaplanner.org
Securing your OVHcloud account with two-factor authentication Find out how to improve security for your OVHcloud account by enabling two-factor authentication Find out how to improve security for your OVHcloud account by enabling two-factor authentication Last updated 21st July 2022 OVHcloud offers tools to optimise security for your account and services. You can enable two-factor authentication (2FA). This is linked to your username-password couple, and you can use it via a device: e.g. a smartphone, tablet, or security key. Find out more about the methods we offer, and how to enable them. You can enable one or more two-factor authentication methods to secure and control access to the OVHcloud Control Panel. We offer two different methods: via an OTP mobile application. Install an OTP mobile application on to your Android or iOS smartphone or tablet. Next, link the application to your OVHcloud account. Each time you try to log in to your account, the application will generate a one-time code valid for a short time period. Once you have linked the application to your account, your device no longer needs an internet connection for the codes to be generated. via a U2F security key. This method involves plugging a U2F USB security key into your computer each time you log in to your OVHcloud account. When you plug in the key, the authentication process takes place automatically. This method offers a higher level of security, as it is based on independent security hardware that is completely separate from your computer, smartphone or tablet. As a result, it is less exposed to the risk of hacking. Once you have added your first method, you can also add one or two other methods, so that you have more choice in how you log in to your account. When you add two-factor authentication for the first time, you are sent emergency codes. Please keep them saved somewhere safe. We recommend saving them in a password manager. You can delete or regenerate them via the OVHcloud Control Panel: As a reminder, please note that it is important to save these emergency codes and ensure that they are valid. If one of the security methods you have selected becomes unavailable (theft or loss of your mobile phone or security key), access to your account may be blocked. Once you have enabled two-factor authentication, the login screen will show the security method selected. If you would like to use another method, click Try another method. All of the methods you have enabled will then appear: If you have lost one of your devices (mobile phone/smartphone/security key) or it stops working, we advise using one of the other two-factor authentication methods enabled on your account. You can also use one of the security codes provided to you. Removing a device does not disable two-factor authentication. To avoid the risk of blocking access to your account, please check that you can use one of the following login methods before removing a device: via a working device via another working method of two-factor authentication via valid security codes To remove a device, please log in to the OVHcloud Control Panel. Click on your name in the top right-hand corner (first step on the image below), then click your initials (second step). Next, click Security (the first step on the image below), then click on the ... icon (second step) to the right of the device you want to delete, and finally, click Remove (third step). To disable two-factor authentication completely on your OVHcloud account, you will need to delete all of the devices entered, and also disable the emergency codes. To remove each device, please refer to the dedicated part of this guide. Once you have removed all your devices, disable the emergency codes by clicking the Disable 2FA codes button. If you no longer have valid devices and if you no longer have valid emergency codes, you can request two-factor authentication to be disabled by contacting our support teams. Before contacting us, you must gather the following documents: Once you have gathered your supporting documents, contact our OVHcloud support teams by calling +65 (3) 1638340. Your documents must be sent to us from an email address registered in your OVHcloud account. After verifying your documents, a support agent will manually disable two-factor authentication on your OVHcloud account and get back to you once done. As a matter of security, once the access is regain, we recommend that you re-enable two-factor authentication on your account as soon as possible.
https://docs.ovh.com/asia/en/customer/secure-account-with-2FA/
2022-08-07T21:43:57
CC-MAIN-2022-33
1659882570730.59
[]
docs.ovh.com
You are viewing documentation for version: 3.1 | This version works with Simplicity Studio 5 only. If you have Simplicity Studio 4, switch to 2.13. For extended documentation (User’s Guides, Code Examples) also switch to 2.13. | For additional versions, see Version History. sl_bt_evt_sync_openedSynchronization Indicates that a periodic advertising synchronization has been opened. Data Structures struct sl_bt_evt_sync_opened_s Data structure of the opened event. Macros #define sl_bt_evt_sync_opened_id 0x004200a0 Identifier of the opened event. Detailed Description Indicates that a periodic advertising synchronization has been opened. Data Structure Documentation ◆ sl_bt_evt_sync_opened_s struct sl_bt_evt_sync_opened_s Data structure of the opened event. Data Fields uint16_t sync Periodic advertising synchronization handle uint8_t adv_sid Advertising set identifier bd_addr address Address of the advertiser uint8_t address_type Advertiser address type. Values: 0: Public address 1: Random address uint8_t adv_phy Enum sl_bt_gap_phy_type_t. The advertiser PHY. Values: sl_bt_gap_1m_phy (0x1): 1M PHY sl_bt_gap_2m_phy (0x2): 2M PHY sl_bt_gap_coded_phy (0x4): Coded PHY, 125k (S=8) or 500k (S=2) uint16_t adv_interval The periodic advertising interval. Value in units of 1.25 ms Range: 0x06 to 0xFFFF Time range: 7.5 ms to 81.92 s uint16_t clock_accuracy Enum sl_bt_sync_advertiser_clock_accuracy_t. The advertiser clock accuracy. uint8_t bonding Bonding handle. Values: SL_BT_INVALID_BONDING_HANDLE (0xff): No bonding Other: Bonding handle
https://docs.silabs.com/bluetooth/3.1/group-sl-bt-evt-sync-opened
2022-08-07T23:18:42
CC-MAIN-2022-33
1659882570730.59
[]
docs.silabs.com
Contents: Contents: In the data grid, you can review how the current recipe applies to the individual columns in your sample. - The grid is the default view in the Transformer page of Trifacta® Self-Managed Enterprise Edition. - the card. Then, click Add. - To modify a suggested recipe step, select its suggestion card and click Edit. Column Menus.. Transformer Toolbar At the top of the data grid, you can use the toolbar to quickly build common transformations, filter the display, and other operations. See Transformer Toolbar. Trifacta Photon running environment, results can differ between executions of the same recipe due to its,. Selecting values You can click and drag to select values in a column: - Select a single value in the column to prompt a set of suggestions. - Select multiple values in a single column to receive a different set of suggestions. - See Selection Details reference in your recipe steps. Some transform steps, such as pivot and union, may make the original row information invalid or otherwise unavailable, which disables this option. See Source Metadata References.. Target Matching Bar When a target has been assigned to your recipe, you can review the expected.
https://docs.trifacta.com/display/r060/Data+Grid+Panel
2022-08-07T21:41:10
CC-MAIN-2022-33
1659882570730.59
[]
docs.trifacta.com
StartDominantLanguageDetectionJob Starts an asynchronous dominant language detection job for a collection of documents. Use the DescribeDominantLanguageDetectionJob operation to track the status of a job. Request Syntax { "ClientRequestToken": " string", "DataAccessRoleArn": " string", "InputDataConfig": { "InputFormat": " string", "S3Uri": " string" }, "JobName": " string", "OutputDataConfig": { "KmsKeyId": " string", "S3Uri": " string" }, "VolumeKmsKeyId": ". For more information, see. An identifier identifier with the DescribeDominantLanguageDet DescribeDominantLanguageDetectionJob operation.:
https://docs.aws.amazon.com/comprehend/latest/dg/API_StartDominantLanguageDetectionJob.html
2019-06-16T03:49:55
CC-MAIN-2019-26
1560627997533.62
[]
docs.aws.amazon.com
Highlight By using the Highlight feature, you can change the color of specific markers such as bars, lines or circles to stand out from others. Highlight feature is available in the following charts. - Bar - Line - Area - Ring / Pie - Histogram - Density Plot - Scatter - Bubble - Boxplot - Violin - Error Bar - Area Map - Long/Lat Map How to Use First, you need to assign a category column such as a character column to Color. Then, select "Highlight" menu from the menu dropdown. Check "Enable Highlight" to enable the Highlight feature. Choose values to highlight, and pick colors for each value. Click Apply to apply the highlight configuration.
https://docs.exploratory.io/viz/highlight.html
2019-06-16T02:43:03
CC-MAIN-2019-26
1560627997533.62
[array(['images/highlight1.png', None], dtype=object) array(['images/highlight2.png', None], dtype=object)]
docs.exploratory.io
If you have a Laravel application deployed srchas user userwith permissions to read the files of mysite.com - Laravel application mysite.comuses a database named dbwith user db_userand password db_pass dstis the destination server for mysite.comand has IP address 10.0.0.2 devis the user that will run mysite.comon server dst - The name of your local machine is local Notes: - We assume that the code of mysite.comis hosted on a git repo. - We'll use scpto copy files from a server to another one using your local machine as intermediate storage. If you're used to working with an FTP/SFTP client, you may use it instead (remember that Moss sets up your new server and therefore you can upload files using SFTP but not FTP, due to security reasons). - We assume your database engine is either MySQL or MariaDB. If that's not the case, contact us via our support chat. the website in the new server If you haven't done it yet, log into Moss and create your Laravel application. Ignore the section Deploy your web app, we'll do that later. - The root domain must be mysite.com - The git repository must be the one with the code of mysite.com - Choose 'MySQL' as the database engine, dbas the database name, db_useras the database user, and db_passas the password (you may change the database password if you want) - Fill out the remainder of the form according to your needs down to the minimum allowed by your DNS provider. In this way, your users will access your site on the new server sooner when you update mysite.com to point to IP address 10.0.0.2. In the figure. Copy your database In this step we'll create a copy of your current database to restore it on the new server afterwards. Get ready for the copy In first place, log into your current server via SSH: ssh [email protected] Assuming that mysite.com is in production, your users might be using your application and updating your database. If this is not the case, skip this and jump into section Dump your database on your current server. Usually, you won't want to lose data during the migration process, and therefore you should prevent your users from writing into your database after dumping its content. The most common ways to handle this are: - Enable maintenance mode in your application. In such mode, your users will see a message stating that you're running some maintenance tasks. Enabling the maintenance mode in Laravel is really easy, just run php artisan down. - Enable read-only mode in your application. This requires you to modify your application so that it rejects write operations but allows read operations. Hence your application will be partly available during the migration, but it's harder to implement. - Stop your current web server (e.g. sudo service apache2 stopor sudo service nginx stop). Your users won't be able to access your server while it's down, so you should warn them in advance that it won't be available for some time due to planned maintenance. Choose the option that better fits your use case and let's dump your database. Dump your database from your current server In general, you can dump your database in two ways: - Using a traditional database management tool. In such case, check out the documentation of your favorite tool, e.g. phpMyAdmin or MySQL Workbench. - Running commands from a shell. This is the option we detail in the following. If you're logged into src via SSH, dump the content of the database your application uses into a compressed file .sql.gz and copy it into your local machine: src$ mysqldump -u db_user -p --databases db_name | gzip > backup.sql.gz src$ exit local$ scp [email protected]:~/backup.sql.gz . Restore your database on your new server Now that you have a copy of your database in your own machine, you can restore it on your new server: local$ scp backup.sql.gz [email protected]:~/ local$ ssh [email protected] dst$ gunzip -c backup.sql.gz | mysql -u db_user -p dst$ exit Copy your 'storage' files Laravel applications save every persistent file (e.g. logs and user-generated content) inside a directory named storage/. In order to complete the migration, you must copy the content therein into your new server. local$ ssh [email protected] src$ cd <path_to_site>/storage src$ tar -czf storage.tgz . src$ exit local$ scp [email protected]:<path_to_site>/storage/storage.tgz . local$ scp storage.tgz [email protected]:~/ local$ ssh [email protected] dst$ cd sites/site.com/shared/storage/ dst$ tar zxvf ~/storage.tgz dst$ exit Deploy the website on the new server Log into your Moss account again and deploy the application on your new server. Check the application works on the new server Before updating your DNS records, you must check that your web applications is working fine on your new server. We'll test that by making mysite.com. check that your application is working. Test your application_2<< Now just wait for old DNS cache entries to expire (2 minutes in this example) and your Laravel application will be available to all your users from the new server managed by Moss. Congrats! 👍
https://docs.moss.sh/articles/1123567-move-your-laravel-application-to-a-server-managed-by-moss
2019-06-16T02:31:30
CC-MAIN-2019-26
1560627997533.62
[array(['https://downloads.intercomcdn.com/i/o/33350536/19e0d6ca71a728d87988aa50/cloudflare-set-ttl.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/43894579/0ab85e9382ecdbfa8b6f6c40/site_deploy_en.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/34232961/c1f62346ef1557c26dca817a/cloudflare-update-record.png', None], dtype=object) ]
docs.moss.sh
Algorithm Simulation System¶ GNSS-IMU-SIM is an IMU simulation project, which generates reference trajectories, IMU sensor output, GPS output, odometer output and magnetometer output. Users choose/set up the sensor model, define the waypoints and provide algorithms, and gnss-imu-sim can generated required data for the algorithms, run the algorithms, plot simulation results, save simulations results, and generate a brief summary. GitHub Link: GNSS-INS-SIM Use the browser’s back button to return.
https://openimu.readthedocs.io/en/latest/simulation.html
2019-06-16T03:59:45
CC-MAIN-2019-26
1560627997533.62
[]
openimu.readthedocs.io
Windows Defender Firewall with Advanced Security Deployment Guide Applies to - Windows 10 - Windows Server 2016 You can use the Windows Defender Firewall with Advanced Security MMC snap-in with devices running at least Windows Vista or Windows Server 2008 to help protect the devices and the data that they share across a network. You can use Windows Defender Firewall to control access to the device from the network. You can create rules that allow or block network traffic in either direction based on your business requirements. You can also create IPsec connection security rules to help protect your data as it travels across the network from device to device. About this guide This guide is intended for use by system administrators and system engineers. It provides detailed guidance for deploying a Windows Defender Firewall with Advanced Security design that you or an infrastructure specialist or system architect in your organization has selected. Begin by reviewing the information in Planning to Deploy Windows Defender Firewall with Advanced Security. If you have not yet selected a design, we recommend that you wait to follow the instructions in this guide until after you have reviewed the design options in the Windows Defender Firewall with Advanced Security Design Guide and selected the one most appropriate for your organization. After you select your design and gather the required information about the zones (isolation, boundary, and encryption), operating systems to support, and other details, you can then use this guide to deploy your Windows Defender Defender Firewall with Advanced Security Design Plan to determine how best to use the instructions in this guide to deploy your particular design. Caution: We recommend that you use the techniques documented in this guide only for GPOs that must be deployed to the majority of the devices device accounts that are members of an excessive number of groups; this can result in network connectivity problems if network protocol limits are exceeded. What this guide does not provide This guide does not provide: Guidance for creating firewall rules for specific network applications. For this information, see Planning Settings for a Basic Firewall Policy in the Windows Defender Firewall with Advanced Security Design Guide. Guidance for setting up Active Directory Domain Services (AD DS) to support Group Policy. Guidance for setting up certification authorities (CAs) to create certificates for certificate-based authentication.. For more information about Windows Defender Firewall with Advanced Security, see Windows Defender Firewall with Advanced Security Overview. Feedback Send feedback about:
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-firewall/windows-firewall-with-advanced-security-deployment-guide
2019-06-16T02:58:55
CC-MAIN-2019-26
1560627997533.62
[]
docs.microsoft.com
Problem Changes to the newrelic.ini file are not taking effect immediately. Solution Restart your web server (Apache, Nginx, PHP-FPM, etc.) after you make any changes to your INI settings. Cause When your web server (Apache, Nginx, PHP-FPM, etc.) first starts up and initializes PHP, it reads all of the INI settings. It also sets the global defaults for any missing settings. Apache then creates a pool of "worker" processes to deal with requests. These worker processes inherit the settings set during initialization. You have no way of knowing exactly which worker process will deal with a given request. When you make INI file changes, there may still be hundreds of worker processes left with the old settings, and the main Apache process itself (which will periodically kill existing and spawn new worker processes) also has the original INI settings. Until you restart your Apache server, most changes to your INI files will go unnoticed. The only exception is if you use PHP's "per-directory" setting mechanism using .htaccess files. Such settings are rare.
https://docs.newrelic.com/docs/agents/php-agent/troubleshooting/ini-settings-not-taking-effect-immediately
2019-06-16T02:43:48
CC-MAIN-2019-26
1560627997533.62
[]
docs.newrelic.com
User contributions User contributions allow you to create a content editing interface for site members. This means that selected website visitors can create, edit and delete content. Even if they aren't editors and cannot access the administration interface or the on-site editing mode. There are several scenarios where you can use User contributions, for example: - Community news – you can create a list of news and allow community members to add news. See an example configuration. - Partner directory – your business partners can add reference projects to their profile on the website. - Intranet knowledge base – your employees can add knowledge base articles to the intranet portal. Implementation Use the following web parts to implement user contributions: - Contribution list – allows you to display a list of pages and a New page link. - Edit contribution – allows you to edit an existing page. Further reading - Example - Publishing community news describes how you can create a new section for publishing community news. - User contributions security describes security possibilities of the User contributions feature.
https://docs.kentico.com/k9/community-features/user-contributions
2019-06-16T03:38:09
CC-MAIN-2019-26
1560627997533.62
[]
docs.kentico.com
After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the documentation . All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via NAT or other method. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access. In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using NAT through the provider network.. You can modify these ranges and gateways to work with your particular network infrastructure. Network interface names vary by distribution. Traditionally, interfaces use “eth” followed by a sequential number. To cover all variations, this guide simply refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number. Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Also, each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node. Warning Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures. Note Your distribution does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/mitaka/install-guide-ubuntu/environment-networking.html
2019-06-16T03:33:18
CC-MAIN-2019-26
1560627997533.62
[]
docs.openstack.org