content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Permissions examples¶
In order to better understand how the Kinto permission model works, it is possible to refer to this set of examples:
A Blog¶
Consider a blog where:
- A list of administrators can CRUD everything;
- Some moderators can create articles and update existing ones;
- Anybody can read.
The following objects are created:
- A bucket
servicedenuages_blog;
- A collection
articles;
- A group
moderatorswith members
fxa:<remy id>and
fxa:<tarek id>.
A Wiki¶
- Authenticated users can create, retrieve, update and delete anything;
- Everyone can read articles.
The following objects are created:
- A
wikibucket, where new groups can be created by authenticated users;
- An
articlecollection is created.
A Company Wiki¶
- Employees of the company can CRUD anything;
- Managers can add employees to the wiki;
- Other people don’t have access.
The following objects are created:
- A
companywikibucket;
- An
articlescollection;
- An
employeesgroup.
A microblogging¶
A microblog is a service to share short articles with people such as Twitter, Google+ or Facebook.
- The microblog administrator creates the bucket;
- Each collection is isolated from the others, and only one person have all permissions on all records;
- Records are private by default, and published to specific audiences.
The following objects are created:
- A
microblogbucket, where groups can be created by authenticated users;
- A single
articlecollection;
- A group
alexis_buddies, whose members are chosen by Alexis (a.k.a circle);
- Some records (messages) with specific visibility (public, direct message, private for a group)
Each time a user creates a new record, it needs to setup the ACLs attached to it.
With this model it is also possible to setup a shared microblogging
account giving record’s
write permission to a group of users.
Note
Another model could be to let users create their own collections of records.
Mozilla Payments tracking¶
For the payment tracking use case, three players are involved:
- The payment app, storing receipts for buyers and sellers;
- The selling app, reading receipts for a given seller;
- The buyer app reading receipts for a given buyer.
Users shouldn’t be able to write receipts themselves, sellers and users should only be able to read their owns.
The following objects are created:
- the
mozillabucket;
- the
paymentcollection.
This ensures every app can list the receipts of every buyer, and that each buyer can also list their receipts. However, only the payment application can create / edit new ones. | http://docs.kinto-storage.org/en/stable/tutorials/permission-setups.html | 2018-05-20T17:13:31 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.kinto-storage.org |
Amazon ECS Task Execution IAM Role
The Amazon ECS container agent makes calls to the Amazon ECS API actions on your behalf, so it requires an IAM policy and role for the service to know that the agent belongs to you. The following actions are covered by the task execution role:
Calls to Amazon ECR to pull the container image
Calls to CloudWatch to store container application logs
The
AmazonECSTaskExecutionRolePolicy policy is shown below.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" } ] }
The Amazon ECS task execution role is automatically created for you in the console first-run experience; however, you should manually attach the managed IAM policy for tasks to allow Amazon ECS to add permissions for future features and enhancements as they are introduced. You can use the following procedure to check and see if your account already has the Amazon ECS task execution role and to attach the managed IAM policy if needed.
To check for the
ecsTaskExecutionRole in the IAM console
Open the IAM console at.
In the navigation pane, choose Roles.
Search the list of roles for
ecsTaskExecutionRole. If the role does not exist, use the procedure below to create the role. If the role does exist, select the role to view the attached policies.
Choose the Permissions tab. Ensure that the AmazonECSTaskExecutionRolePolicy managed policy is attached to the role. If the policy is attached, your Amazon ECS task execution role is properly configured. If not, follow the substeps below to attach the policy.
Choose Attach policy.
In the Filter box, type AmazonECSTaskExecutionRolePolicy to narrow the available policies to attach.
Check the box to the left of the AmazonECSTaskExecutionRolePolicy policy and choose Attach policy.
Choose the Trust relationships tab, and Edit trust relationship.
Verify that the trust relationship contains the following policy. If the trust relationship matches the policy below, choose Cancel. If the trust relationship does not match, copy the policy into the Policy Document window and choose Update Trust Policy.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
To create the
ecsTaskExecutionRole IAM role
Open the IAM console at.
In the navigation pane, choose Roles and then choose Create role.
In the Select type of trusted entity section, choose Elastic Container Service.
For Select your use case, choose Elastic Container Service Task, then choose Next: Permissions.
In the Attach permissions policy section, search for AmazonECSTaskExecutionRolePolicy and select the policy and choose Next: Review.
For Role Name, type
ecsTaskExecutionRoleand choose Create role. | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html | 2018-05-20T17:39:13 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.aws.amazon.com |
Your Feedback Matters
We know how important our documentation is to your company’s success. We want to know what works for you and what doesn’t.
- Feedback forms: As you’re working with our documentation—whether it’s in the Salesforce Help, release notes, or developer guides at Developer Force—look for the feedback form and vote up or down. Add comments if you have them.
- Twitter: Tweet us at @salesforcedocs. | https://releasenotes.docs.salesforce.com/en-us/summer14/release-notes/rn_feedback_release_notes.htm | 2018-05-20T17:40:12 | CC-MAIN-2018-22 | 1526794863662.15 | [] | releasenotes.docs.salesforce.com |
GELF¶
Structured events from anywhere. Compressed and chunked.¶.
GELF via UDP¶
Chunking¶
UDP datagrams are usually limited to a size of 8192 bytes. A lot of compressed information fits in there but you sometimes might just have more information to send. This is why Graylog supports chunked GELF.
You can define chunks of messages by prepending a byte header to a GELF message including a message ID and sequence.
Prepend the following structure to your GELF message to make it chunked:
-.
Compression¶
When using UDP as transport layer, GELF messages can be sent uncompressed or compressed with either GZIP or ZLIB.
Graylog nodes detect the compression type in the GELF magic byte header automatically.
Decide if you want to trade a bit more CPU load for saving a lot of network bandwidth. GZIP is the protocol default.
GELF via TCP¶
At the current time, GELF TCP only supports uncompressed and non-chunked payloads. Each message needs to be delimited with a null byte (
\0) when sent in the same TCP connection.
Attention
GELF TCP does not support compression due to the use of the null byte (
\0) as frame delimiter.
GELF Payload Specification¶
Version 1.1 (11/2013)
A GELF message is a JSON string with the following fields:
-
- the current timestamp (now) by¶
This is an example GELF message payload. Any graylog-server node accepts and stores this as a message when GZIP/ZLIB compressed or even when sent uncompressed over a plain socket (without newlines):
{ "version": "1.1", "host": "example.org", "short_message": "A short message that helps you identify what is going on", "full_message": "Backtrace here\n\nmore stuff", "timestamp": 1385053862.3072, "level": 1, "_user_id": 9001, "_some_info": "foo", "_some_env_var": "bar" }
Sending GELF messages via UDP using netcat¶
Sending an example message to a GELF UDP input (running on host
graylog.example.com on port 12201):
echo -n '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "foo" }' | nc -w0 -u graylog.example.com 12201
Sending GELF messages via TCP using netcat¶
Sending an example message to a GELF TCP input (running on host
graylog.example.com on port 12201):
echo -n -e '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "foo" }'"\0" | nc -w0 graylog.example.com 12201
Sending GELF messages via TCP using curl¶
Sending an example message to a GELF HTTP input (running on):
curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "foo" }' '' | http://docs.graylog.org/en/2.1/pages/gelf.html | 2018-05-20T17:49:14 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.graylog.org |
The Assertions of this test case are the following:
Assertion 1: Validate that there are two geo events both of which are violations (Overspeed, Excessive Breaking) in source. Validate there are two speeding events both of which are speeding (96, 83)
Assertion 2: Validate the Join of data between geo stream and speed streams
Assertion 3: Validate that the filter “EventType” detects that this is a “Violation Event”
Assertion 4: Validate the inputs of the window should be two events (geo/speed 1 with speed of 83, geo/speed 2 with speed of 96)
Assertion 5: Validate the result of the DriverAvgSpeed agregrate processor should be one event that represents the average of 83 and 96…89.5
Assertion 6: Validate the isDriverSpeeding rule recognizes it as speeding event (89.5) since it is greater than 80 and continue that event to custom round UDF
Assertion 7: Validate the output of the round UDF event should change the speed from 89.5 to 90.0 and that is the final event that goes to the sink.
Create a test named “Test-Multiple-Speeding-Events”, use the following test data for TruckGeoEvent and use the following test data for TruckSpeedEvent. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.0/bk_getting-started-with-stream-analytics/content/ch08s01s07.html | 2018-05-20T17:27:23 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.hortonworks.com |
How to: Access Association and Initiation Form Data in a Workflow in SharePoint Server 2010 (ECM)
Applies to: SharePoint Server 2010
When a workflow instance starts, any association and initiation data gathered from the user is stored in an SPWorkflowActivationProperties object, which you can access through the WorkflowProperties property of the OnWorkflowActivated activity. This object contains standard information for every workflow, as well as custom data from the workflow definition XML or from workflow association and initiation forms. By accessing the SPWorkflowActivationProperties object variable you specify for the WorkflowProperties property, you can use the information passed to the workflow.
The SPWorkflowActivationProperties object contains a standard set of properties for every workflow instance in SharePoint Foundation, including HistoryListId, ItemId, TaskListId, and WorkflowId.
In addition, the object also contains strings, represented by the AssociationData and InitiationData properties, which store custom properties for the workflow association and initiation, respectively.
For Microsoft Office InfoPath forms, these properties return XML strings that conform to the schema of the form used to gather the data. To access these custom properties, you must write code that parses the XML string.
The workflow developer can decide what method to use to parse the XML string and identify the custom properties it contains. For this procedure, we will use a Visual Studio command line tool, xsd.exe, to generate a class based on the schema of the workflow form; then we will store incoming form data into an object of that type by deserializing the XML string returned by the InitiationData property of the SPWorkflowActivationProperties object.
For more information about setting activity properties, see the Windows Workflow Foundation SDK.
To access association or initiation form data in your workflow
Extract the schema of your InfoPath association or initiation form.
In InfoPath, open your saved and published workflow form.
To save the form as source files, command line tool, xsd.exe to generate a new class file, based on the form schema file (.xsd).
Open a Visual Studio command prompt. Click Start, point to All Programs, point to Microsoft Visual Studio 2010, point to Visual Studio Tools, and then click Visual Studio 2010 Command Prompt.
Note
By default, Visual Studio 2010 installs the xsd.exe command line tool to the following location, where C: represents the drive on which you have installed Visual Studio 2010: C:\Program Files\Microsoft Visual Studio 10\SDK\v2.0\Bin
Navigate to the location of the form schema (.xsd) file, and then run the following command: xsd myschema.xsd /c /o:output_directory
The xsd.exe tool generates a new class file based on the form schema. The file is named the same as the schema file, myschema.cs. The class in the file is named the same as the root element of the schema, which was named the same as the form fields collection.
Note
Specifying a unique name for the form fields collection, rather than using the default name of myfields, helps to ensure that the class generated from the form schema file also has a unique name. This is especially important when you are programming a workflow that uses multiple forms.
In Visual Studio, open your workflow project, and add the new class file to it.
Add code to your workflow that serializes a new instance of the new class, by using the workflow association or initiation data.
For example, the following code serializes a new object, of type InitForm, from the InitiationData property of a SPWorkflowActivationProperties object variable named workflowProps. This example assumes the developer has created a class, InitForms, whose schema matches the schema of the InfoPath form used to gather the initiation data.
using System.Xml.Serialization; using System.Xml; … XmlSerializer serializer = new XmlSerializer(typeof(InitForm)); XmlTextReader reader = new XmlTextReader(new System.IO.StringReader(workflowProps.InitiationData)); InitForm initform = (InitForm) serializer.Deserialize(reader);
Imports System.Xml.Serialization Imports System.Xml … Dim serializer As New XmlSerializer(GetType(InitForm)) Dim reader As New XmlTextReader(New System.IO.StringReader(workflowProps.InitiationData)) Dim initform As InitForm = CType(serializer.Deserialize(reader), InitForm)
Add code to your workflow that accesses the custom properties as properties of the class, based on the form schema.
The following code builds on the previous example. The code accesses three custom properties of the InitForm object and assigns them to string variables.
assignee = initform.assignee; instructions = initform.instructions; comments = initform.comments;
assignee = initform.assignee instructions = initform.instructions comments = initform.comments
See Also
Tasks
How to: Design InfoPath Workflow Forms (ECM)
How to: Design a Workflow Form to Use Association and Initiation Data in SharePoint Server 2010 (ECM)
Concepts
InfoPath Forms for Workflows (ECM)
Workflow Association and Initialization Forms in SharePoint Server 2010 (ECM) | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ms566880(v=office.14) | 2018-05-20T18:19:23 | CC-MAIN-2018-22 | 1526794863662.15 | [] | docs.microsoft.com |
Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. | https://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.testing.assert_raises.html | 2017-10-17T07:58:08 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.scipy.org |
Conventions followed in the Manual
Command line examples and program output are displayed in a box:
# {{web.html($some_output_here)}}
Menu options, button names, and other interactive controls are shown in boldface type. For example: Choose Configure... from the pop-up menu.
Boldface type also denotes literal portions of a command line; italic type denotes a variable that you must substitute with an actual value. For example,
Enter copy filename means that you should type "copy" followed by the actual file name you wish to copy, followed by the Enter key.
The term Localhost is used to refer to the server on which ZMC is installed and the term Remote host is used to refer to other servers that are to be backed up.
# {{web.html($some_output_here)}}
Retrieved from ""
Viewing Details: | http://docs.zmanda.com/Template:Typographical_Conventions | 2017-10-17T07:53:39 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.zmanda.com |
Puppet Server: Known Issues
Included in Puppet Enterprise 2015.3. A newer version is available; see the version menu above for details..
Config Reload
SERVER-15: In the current builds of Puppet Server, there is no signal handling mechanism that allows you to request a config reload/service refresh. In order to clear out the Ruby environments and reload all config, you must restart the service. This is expensive, and in the future we’d like to support some mechanisms for reloading rather than restarting., this property should be added
to the
JAVA_ARGS variable defined in either
/etc/sysconfig/puppetserver
or
/etc/default/puppetserver, depending on upon your distribution. Note that
the service will need to be restarted in order for this change to take effect... | https://docs.puppet.com/puppetserver/2.2/known_issues.html | 2017-10-17T07:42:18 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.puppet.com |
Cameo Simulation Toolkit 18.5 Documentation
Support for Monte Carlo Simulation
Support for Monte Carlo simulation provides:
- a config that automatically runs any other config more than one time,
- different distribution models, and
- random values initialization using the model.
Results from a running simulation can be recorded into a CSV file.
Figure 1: Running Monte Carlo simulation in CST.
Support for FMI 2.0 for Co-Simulation
CST has the capability to co-simulate FMU files version 2.0, represented as blocks in a SysML model.
Figure 2: Simulating FMU blocks in CST.
Timelines Export to File
Use the Export button to export Timelines. Results are shown in the new Result File option in the configuration settings. The available file formats are CSV, PNG, and HTML.
Figure 3: The new capability to export a Timeline chart as a file.
New MATLAB Java API for Integration
CST uses the new official MATLAB Engine (R2016b) Java API to ensure smooth integration with MATLAB.
Figure 4: MATLAB integration based on the new MATLAB Java API.
Support for Duration Constraints on Activities
CST supports duration constraints on Activities when the duration constraints on CallBehaviorActions do not exist. However, if constraints are put on both the Activities and CallBehaviorActions, only the CallBehaviorActions constraints will be used. | https://docs.nomagic.com/display/CST185/Upcoming%21+What%27s+New+in+Cameo+Simulation+Toolkit+19.0 | 2017-10-17T07:48:54 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.nomagic.com |
Users¶
It is recommended to create an account for each individual user accessing Graylog.
User accounts have the usual properties such as a login name, email address, full name, password etc. In addition to these fields, you can also configure the session timeout, roles and timezone.
Sessions¶
Each login for a user creates a session, which is bound to the browser the user is currently using. Whenever the user interacts with Graylog this session is extended.
For security reasons you will want to have Graylog expire sessions after a certain period of inactivity. Once the interval
specified by
timeout expires the user will be logged out of the system. Requests like displaying throughput statistics
do not extend the session, which means that if the user keeps Graylog open in a browser tab, but does not interact with it,
their session will expire as if the browser was closed.
Logging out explicitly terminates the session.
Timezone¶
Since Graylog internally processes and stores messages in the UTC timezone, it is important to set the correct timezone for each user.
Even though the system defaults are often enough to display correct times, in case your team is spread across different timezones, each user can be assigned and change their respective timezone setting. You can find the current timezone settings for the various components on the System / Overview page of your Graylog web interface.
Initial Roles¶
Each user needs to be assigned at least one role, which governs the basic set of permissions this user has in Graylog.
Normal users, which do not need to create inputs, outputs or perform administrative tasks like managing access control etc,
should be assigned the built in
Reader role in addition to the custom roles which grant access to streams and dashboards. | http://docs.graylog.org/en/2.3/pages/users_and_roles/users.html | 2017-10-17T07:33:50 | CC-MAIN-2017-43 | 1508187820930.11 | [array(['../../_images/create_user.png', '../../_images/create_user.png'],
dtype=object) ] | docs.graylog.org |
Relay uses a common pattern for mutations, where they.
© 2013–present Facebook Inc.
Licensed under the BSD License. | http://docs.w3cub.com/relay/graphql-mutations/ | 2017-10-17T07:31:32 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.w3cub.com |
Scaling
Nanobox is designed to make scaling your app's infrastructure simple. You have the ability to scale your app in multiple ways, each triggered through your Nanobox dashboard. Nanobox takes care of all the "heavy-lifing" behind the scenes.
The next few docs provide information on how to scale, when to scale, different scaling strategies, and other important information to help ensure your app is able to meet demand.
Bunkhouse Servers
Moving Components Out of a Bunkhouse
Scaling Methods
How to Scale
When to Scale
Reach out to [email protected] and we'll try to help. | https://docs.nanobox.io/scaling/ | 2017-10-17T07:38:35 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.nanobox.io |
- Unexpected, extra characters appear on my site
- There are strange characters appearing on my site
- My site looks different when viewed with some web browsers
Videos
- A Video on my site does not behave correctly
- A Video on my site takes too long to load
- Some visitors cannot view a Video on my site | http://docs.karelia.com/z/Troubleshooting.html | 2009-07-04T09:33:56 | crawl-002 | crawl-002-015 | [] | docs.karelia.com |
Amazon
Benefits of Amazon
If you're selling on Amazon, you can:
- receive customer's emails in Gorgias,
- respond from there.
How to connect Amazon?
-.
Showing orders next to tickets
You can also display Amazon, eBay and Walmart orders next to tickets using the integration created by ChannelReply ($31/month). Add Amazon order data here.
| https://docs.gorgias.com/ecommerce-integrations/amazon | 2020-09-18T23:57:19 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['https://files.helpdocs.io/eQ5bRgGfSN/articles/I88bQBirDn/1572893487272/image.png',
None], dtype=object) ] | docs.gorgias.com |
The command ldd does not exist on Darwin, but the same functionality can be found by using otool -L. This will tell you which libraries a binary is linked against.
The ioreg command is very nifty indeed. It will traverse the IORegistry, showing you the devices connected to your system and the heirarchy of the IOKit handlers for each device. Even if the device is not recognized, it will show you that a device is there and some information about it.
This is very useful when trying to write an IOKit module and figuring out what the IONameMatch tag should be for the device you want to associate with.
synthfs is a simple in-memory filesystem. It can't hold data files, but will allow creation of directory trees. Handy if you're, oh, say, trying to manufacture transient mountpoints to provide a home for your disks, or even NFS mounts.
Drive numbers like disk2 are numbered in order as the drive becomes available. For machines with a mixture of ATA and SCSI disks and ATAPI CD-ROMs, the number can be non-deterministic based on how fast various devices spin up and come online. Disk numbering is not necessarily influenced by topology, so the Master/Slave relationships on an IDE bus may not always cause deterministic numbering.
Yes! Unfortunately, the support is just read-only right now. When in Linux you can mount your Darwin partition with the following command:
mount -t ufs -o ufstype=openstep -o ro /dev/darwindev /path/to/mount
There are also some tools for manipulating HFS+ filesystems from Linux. | https://docs.huihoo.com/darwin/opendarwin/faq/ch02s03.html | 2020-09-18T22:50:50 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.huihoo.com |
Teams is slow during video meetings on laptops docked to 4K/HDR monitors
Summary
Overall Teams performance on laptops may be affected during meetings that use video. This can occur if a laptop is docked to an external 4K or ultra-high-definition (also known as high-dynamic range or HDR) display.
Workaround
Reduce the resource requirements for your laptop to improve the Teams experience during the meeting. To do this, try one or more of the following methods:
- Close any other applications that are running video.
- Close browser tabs that you don’t need during the meeting.
- Disconnect your monitor from the port replicator or docking station, and directly connect it to the video port on the laptop, if available.
- Change the resolution of your 4K or HDR monitor temporarily to 1920 x 1080. Instructions for doing this on Windows are located at Change desktop icon size or screen resolution.
- Turn off your own video during the meeting. To do this, select Turn camera off in the meeting controls.
- Turn off incoming video during the meeting. To do this, select More actions … > Turn off incoming video in the meeting controls.
- Restart your laptop.
Resolution
Microsoft is actively working to resolve these issues, and will be releasing updates over the next several months. We’ll post more information in this article as it becomes available. | https://docs.microsoft.com/en-us/microsoftteams/troubleshoot/known-issues/teams-slow-video-meetings-laptops-4k | 2020-09-19T01:15:03 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.microsoft.com |
Deleting Pencil Line Textures
You can delete pencil texture palettes that you do not you wish to delete.
- Do one of the following:
- Click on the Remove Colour button.
- Right-click on the pencil texture and select Delete.
- Open the Colour view menu and select Colours > Delete.
- Press Del (Windows/Linux) or Backspace (macOS).
The selected pencil texture is removed from the pencil texture palette.
> delete.
- Do one of the following:
- Click on the Delete Texture
button.button.
- Open the Brush menu
and select Delete Texture.and select Delete Texture.
The selected pencil texture is removed from the pencil texture palette. | https://docs.toonboom.com/help/harmony-17/paint/drawing/delete-pencil-texture.html | 2020-09-19T00:07:22 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/draw-pencil-texture-line-1.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/draw-pencil-texture-line-2.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/draw-pencil-texture-line-3.png',
None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/pencil-tool-properties-dialog-button.png',
'Stroke Preview Stroke Preview'], dtype=object)
array(['../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/HAR11_Pencil_Preset_Texture.png',
'Pencil Preset Texture Pencil Preset Texture'], dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/new-pencil-texture-2.png',
None], dtype=object) ] | docs.toonboom.com |
If you just switched to export your Android project using Gradle instead of the old system, you may encounter build errors, especially if you are using additional Android libraries, or if you have added a custom AndroidManifest.xml.
The Android Gradle plug-in is much more picky than the old ADT/Ant system. It does not accept anything it considers an error, whether it’s duplicate symbols, references to resources that don’t exist, or a library project that sets the same attribute as the main application.
In most cases, fixing the problem involves editing an AndroidManifest.xml file; either the main one, or one from a library your project uses.
In a non-trivial project, or if the project has issues not described by the troubleshooting section below, export the project as a Gradle project (from Build Settings) and build from the command line. Building from the command line gives you more detailed error messages, and makes for a quicker turnaround when applying changes.
An AndroidManifest.xml file, either the main one or in a library, references a non-existing resource. Often it is the application icon or label string that is set by a library. This can happen if you have copied your main manifest to a library project without removing those references.
Remove the attribute from one of the Android Manifests – normally the one from the library.
You have a file name collision between your main application and a library project, or between two library projects. Keep in mind that all of the files are copied into the same APK package.
You need to remove one of the files.
A library can not use the same Java package as the main application, or any other library.
Usually, you should change the package name of the library to something different. If the library contains a lot of code, it may be easier to change the main package name (from Player Settings).
A library can not freely override attributes from the main AndroidManifest.xml. Often this error is caused by a library setting the application icon or label string, similar to the Resource not found problem above.
Either remove the attribute from the library, or add a tools:replace attribute to your application tag, to indicate how the merge conflict should be resolved. | https://docs.unity3d.com/ja/2018.1/Manual/android-gradle-troubleshooting.html | 2020-09-19T00:24:05 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.unity3d.com |
. There are separate snap channels for each major
release stream, e.g.
2.x,
3.x, as well as a
latest stream.
After installing snapd, the CouchDB snap can be installed via:
$ sudo snap install couchdb
CouchDB will be installed at
/snap/couchdb. Data will be stored at
/var/snap/couchdb/.
Please note that all other file system paths are relative to the snap `chroot` instead of the system root. In addition, the exact path depends on your system. For example, when you normally want to reference /opt/couchdb/etc/local.ini, under snap, this could live at /snap/couchdb/5/opt/couchdb/etc/local.ini.
Your installation is not complete. Be sure to complete the Setup steps for a single node or clustered installation.
Further details on the snap build process are available in our couchdb-pkg git repository. | http://docs.couchdb.com/en/latest/install/snap.html | 2020-09-18T23:00:36 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.couchdb.com |
Recommended Reading¶
The sections below contain information, or at least links to information, that should be helpful for anyone who wants to use Nexus, but who is not an expert in one of the following areas: installing python and related modules, installing PWSCF and QMCPACK, the Python programming language, and the theory and practice of Quantum Monte Carlo.
Helpful Links for Installing Python Modules¶
- Python itself
- Download: sure to get Python 2.x, not 3.x.
- Numpy and Scipy
- Download and installation:.
- Matplotlib
- Download::
- H5py
- Download and installation:
PWSCF: pw.x, pw2qmcpack.x, pw2casino.x¶
READMEfile in
qmcpack/external_codes/quantum_espressofor instructions on patching Quantum Espresso version 5.1.
QMCPACK: qmcpack, qmcpack_complex, convert4qmc, ppconvert, sqd¶
Wfconvert: wfconvert¶
VASP¶
GAMESS¶
Download: Install: See the “readme.unix” file in the GAMESS source distribution (gamess/machines/readme.unix).
Brushing up on Python¶
Python¶
Python is a flexible, multi-paradigm, interpreted programming language with powerful intrinsic datatypes and a large library of modules that greatly expand its functionality. A good way to learn the language is through the extensive Documentation provided on the python.org website. If you have never worked with Python before, be sure to go through the Tutorial. To learn more about the intrinsic data types and standard libraries look at Library Reference. A very short introduction to Python is in Basic Python Constructs.
NumPy¶
Other than the Python Standard Library, the main library/module Nexus makes heavy use of is NumPy. NumPy provides a convenient and fairly fast implementation of multi-dimensional arrays and related functions, much like MATLAB. If you want to learn about NumPy arrays, the NumPy Tutorial is recommended. For more detailed information, see the NumPy User Guide and the NumPy Reference Manual. If MATLAB is one of your native languages, check out NumPy for MATLAB Users.
Matplotlib¶
Plotting in Nexus is currently handled by Matplotlib. If you want to learn more about plotting with Matplotlib, the Pyplot Tutorial is a good place to start. More detailed information is in the User’s Guide. Sometimes Examples provide the fastest way to learn.
Scipy and H5Py¶
Nexus also occasionally uses functionality from SciPy and H5Py. Learning more about them is unlikely to help you interact with Nexus. However, they are quite valuable on their own. SciPy provides access to special functions, numerical integration, optimization, interpolation, fourier transforms, eigenvalue solvers, and statistical analysis. To get an overview, try the SciPy Tutorial. More detailed material is found in the Scipy Reference. H5Py provides a NumPy-like interface to HDF5 data files, which QMCPACK creates. To learn more about interacting with HDF5 files through H5Py, try the Quick Start Guide. For more information, see the General Documentation.
Quantum Monte Carlo: Theory and Practice¶
Currently, review articles may be the best way to get an overview of Quantum Monte Carlo methods and practices. The review article by Foulkes, et al. from 2001 remains quite relevant and is lucidly written. Other review articles also provide a broader perspective on QMC, including more recent developments. Another resource that can be useful for newcomers (and needs to be updated) is the QMC Wiki. If you are aware of resources that fill a gap in the information presented here (almost a certainty), please contact the developer at [email protected] to add your contribution. | https://nexus-workflows.readthedocs.io/en/latest/reading.html | 2020-09-18T22:26:42 | CC-MAIN-2020-40 | 1600400189264.5 | [] | nexus-workflows.readthedocs.io |
The core of multi-tenancy is to allocate the authority relationship between different users and resources. For the container management platform, the main resources are computing resources, storage resources and network resources, which are also the key object resources of KubeSphere multi-tenany.
In the KubeSphere multi-tenancy system, resources are divided into three levels:
Resources at different levels can be flexibly customized to divide users' permission scope, which is used to achieve resource isolation between different users.
Common permission management models include ACL, DAC, MAC, RBAC and ABAC. In KubeSphere, we make use of the RBAC authority management model to control users' authority. Users don't need to directly associate with resources, but carry out authority control through role definition.
Cluster
Clustering refers to the current Kubernetes cluster, which provides computing, storage, and network resources for tenants. workspaces can be created under a cluster.
Workspaces
Under a cluster, you can create workspaces to manage different projects in groups. Projects and DevOps projects can be created in workspaces.
Projects and DevOps projects
Projects, DevOps projects are the minimum level of version permission management, consuming the resources of the cluster to deploy and build applications.
Cluster permission control
Cluster roles define user control over cluster resources, such as nodes, monitoring, accounts, and so on.
Workspaces permission control
The workspaces role defines the user's control authority over projects and projects in the workspaces and the management authority of workspaces members.
Project and project permission control
Creators of projects and projects can share their projects with other users by inviting members, giving different members different roles and differentiating permissions.
In familiar and understand the resource hierarchy, permissions management way, to taichung every level administrators and ordinary users, understand the meaning of each grade of concrete members and roles, how to better management platform, the role of members and is the key links of actual use, please continue to refer to the role authorization overview. | https://v1-0.docs.kubesphere.io/docs/multi-tenant/intro/ | 2020-09-18T23:20:40 | CC-MAIN-2020-40 | 1600400189264.5 | [] | v1-0.docs.kubesphere.io |
How to export tickets?
You can export all the tickets that are on your helpdesk into a CSV.
Each row of the file will contain information about the ticket, including the tags, the related satisfaction survey and other useful information.
Here is the full list of fields that are currently included:
- Ticket id
- Initial channel
- Last used integration name
- Last used integration type
- Created by an agent
- Subject
- Creation date
- Closed date
- Survey score
- Survey replied date
- Assignee name
- Assignee email
- Customer email
- Customer name
- Customer last Shopify order
- First response time
- Resolution time
- Number of agent messages
- Number of customer messages
Each export can contain up to 1 million tickets. If you attempt to run an export on a view that contains more tickets, it will automatically stop at that limit.
We will sort the tickets in the same order as the view from which you started it.
Once the export is ready, you will receive an email containing a link to the generated file. The CSV will be downloadable during the next 14 days.
Our system usually takes 1 hour to export up to 100 000 tickets. So for 1 Million tickets, the export can last up to 10 hours.
If you have any more question about this feature, please contact us at [email protected] | https://docs.gorgias.com/faq/how-to-export-tickets | 2020-09-18T23:42:05 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['https://files.helpdocs.io/eQ5bRgGfSN/articles/vpr5eccck4/1579291021932/export.gif',
None], dtype=object) ] | docs.gorgias.com |
Patching Presentation video is now available for download
We got the video of my patching presentation posted from the SharePoint 2008 conference. There truely is a lot of great information here, and this download will be referenced from many Microsoft locations as a must see when it comes to patching SharePoint.
Here are the links:
News Story -
Download Link -
Enjoy! | https://docs.microsoft.com/en-us/archive/blogs/dwinter/patching-presentation-video-is-now-available-for-download | 2020-09-19T01:12:18 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.microsoft.com |
Install on Solaris
You can install Splunk Enterprise on Solaris with a PKG packages, or a tar file.
To install the Splunk universal forwarder, see Install a *nix universal forwarder in the Universal Forwarder manual. The universal forwarder is a separate executable, with its own set of installation procedures.
Upgrading?
If you are upgrading, see How to upgrade Splunk for instructions and migration considerations before proceeding.
Install Splunk
Splunk Enterprise.
Caveat for software package validation when using PKG files for installation
After installation, software package validation commands (such as
pkg verify might fail because of intermediate files that get deleted during the installation process. To verify your Splunk installation package, use the
splunk validate files CLI command instead.
tar file install
The tar file is a manual form of installation. a Solaris system, expand the tar file
What gets installed
To learn more about the Splunk Enterprise package and what it installs, run the following command:
pkginfo -l splunk
To list all packages that have been installed on the host:
pkginfo! | https://docs.splunk.com/Documentation/Splunk/6.6.12/Installation/InstallonSolaris | 2020-09-19T00:45:19 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Error Status
Status is an arbitrary label you can apply to groups of errors based on a property value. For example, when you want to work on an error, you can set the status
Assigned to Jane to all the errors with that message. You do this with the “Set Status” action in any of the UI pages. You can filter the UI to show you errors with your assignment status values using the Filter Toolbar.
There are several default status values already defined in your TrackJS account, and you can add as many additional statuses as are valuable for your team.
Status is applied to Errors asynchronously, and it might take up to 60 seconds for all of the errors matching your request to be tagged. Setting the status will set the status on all future errors matching your request as well.
When To Use Status
Status has many different use cases:
- setting the version the error will be resolved
- setting who is responsible for an error
- setting the priority or impact of an error
- setting the root cause of an error
Most TrackJS users will use a combination of the above based on their team workflow. For example, errors could be categorized on impact into
P1,
P2, and
P3 statuses. As developers work on errors, they change the status to
Assigned to Jane when it looks important. Jane could then change the status to
Fixed in 3.2.1 or
User Extension based on her findings.
Special Status Behavior
Errors tagged
Informational will be excluded from your page quality reports, such as the weekly summary email. This status indicates that this event is not impactful to your users, but you still want to know when it occurs. This could be something like an unexpected action or an important usage event. | https://docs.trackjs.com/data-management/status/ | 2020-09-19T00:05:42 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.trackjs.com |
The.
Default URL is set to in development environment. 3.3.0
If you want to build for your production environment:
_config.ymle.g.
url:.
JEKYLL_ENV=production bundle exec jekyll build.
Changes to
_config.ymlare not included during automatic regeneration.
The
_config.ymlmaster configuration file contains global configurations and variable definitions that are read once at execution time. Changes made to
_config.ymlduring automatic regeneration are not loaded until the next execution.
Note Data Files are included and reloaded during automatic regeneration.
Destination folders are cleaned on site builds.
Jekyll also comes with a built-in development server that will allow you to preview what the generated site will look like in your browser locally.
jekyll serve # => A development server will run at # Auto-regeneration: enabled. Use `--no-watch` to disable. jekyll serve --livereload # LiveReload refreshes your browser after a change. jekyll serve --incremental # Incremental will perform a partial build in order to reduce regeneration time. jekyll serve --detach # => Same as `jekyll serve` but will detach from the current terminal. # If you need to kill the server, you can `kill -9 1234` where "1234" is the PID. # If you cannot find the PID, then do, `ps aux | grep jekyll` and kill the instance.
jekyll serve --no-watch # => Same as `jekyll serve` but will not watch for changes.
These are just a few of the available configuration options. Many configuration options can either be specified as flags on the command line, or alternatively (and more commonly) they can be specified in a
_config.yml file at the root of the source directory. Jekyll will automatically use the options from this file when run. For example, if you place the following lines in your
_config.yml file:
source: _source destination: _deploy
Then the following two commands will be equivalent:
jekyll build jekyll build --source _source --destination _deploy
For more about the possible configuration options, see the configuration page.
Call for help
The
helpcommand is always here to remind you of all available options and usage, and also works with the
build,
serveand
newsubcommands, e.g
jekyll help newor
jekyll help build.
If you’re interested in browsing these docs on-the-go, install the
jekyll-docs gem and run
jekyll docs in your terminal.
© 2008–2018 Tom Preston-Werner and Jekyll contributors
Licensed under the MIT license. | https://docs.w3cub.com/jekyll/usage/ | 2020-09-18T23:06:48 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.w3cub.com |
Terminal Server registry settings for applications
This article discusses the registry settings that can be used to modify application behavior on a Terminal Server computer.
Original product version: Windows Server 2012 R2
Original KB number: 186499
More information
For more information on MSI (Microsoft Windows Installer) Behavior based on Terminal Server Versions 2003 and later, go to KB 2002357.
Controlling Application Execution in Execute Mode
Several compatibility bits can be set for an application, registry path, or .ini file to change how a Terminal Server computer handles the merging of application initialization data when a session is in execute mode. These compatibility bits are set in the registry under the following subkey: isn't specified and multiple applications may use the same file name (for example, Setup.exe and Install.exe are now regularly used for installation programs), specify the application type to help make sure that the compatibility settings don.
Locate).
Applications
The following compatibility bits affect the application when it's running. They're located in the following registry subkey (where.
.Ini Files
The following compatibility bits control .ini file propagation. They're located in the following registry subkey (where doesn't delete any existing data in the user's .ini file. If this bit isn't set, it overwrites the user's .ini file if it isn't set, it replaces all paths to the Windows directory with the path to the user's Windows directory.
Registry Paths
The following compatibility bits control registry propagation. They're located in the following registry subkey (where aren't added to the user's registry. Additionally, the system doesn't delete any existing data in the user's registry. If this bit isn't set, the system deletes and overwrites the user's registry data if the data is older than the system master registry data. If the bit isn't set, the system also adds any new keys not in the user's registry.
For additional information, click the following article number to view the article in the Microsoft Knowledge Base:
186514 Terminal Server does not support sentinel devices | https://docs.microsoft.com/en-us/troubleshoot/windows-server/remote/registry-settings-change-apps-terminal-server | 2020-09-18T23:26:14 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.microsoft.com |
Enterprise application Repository in Workspace ONE UEM console allows you to easily set up common applications. You can add internal applications on your network with an external application repository and manage the applications with Workspace ONE UEM. Once the applications are added to the application Repository, they can then be distributed and installed on devices.
Complete the following steps to add an application using Enterprise Application Repository.
Procedure
- Navigate to .
- Search and select the internal application.
- Select Next.
- Edit the Application Name or Managed By if necessary, and select Next.You can review the summary before adding the application. You can edit some fields later through the application list view.
- Select Save. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/Software_Distribution/GUID-FB0F7A34-BF34-469B-9682-256CFEC9571B.html | 2020-09-19T00:51:18 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.vmware.com |
The Inflection API
The
Inflection class converts words from their base form to a user-specified. inflection type. The class aggregates dictionary based lookup and rule based inflections, including the nerual-network models used to select the appropriate rules. It is implemented as a singleton that is instantiated for the first time when you call any of its methods from
lemminflect.
Only the base form of a word can be inflected and the library methods here expect the incoming word to be a lemma. If your word is not in its base form, first call the lemmatizer to get the base form. When using the spaCy extension, lemmatization is handled internally.
Examples
Usage as a library
> from lemminflect import getInflection, getAllInflections, getAllInflectionsOOV > getInflection('watch', tag='VBD') ('watched',) > getAllInflections('watch') {'NN': ('watch',), 'NNS': ('watches', 'watch'), 'VB': ('watch',), 'VBD': ('watched',), 'VBG': ('watching',), 'VBZ': ('watches',), 'VBP': ('watch',)} > getAllInflections('watch', upos='VERB') {'VB': ('watch',), 'VBP': ('watch',), 'VBD': ('watched',), 'VBG': ('watching',), 'VBZ': ('watches',)} > getAllInflectionsOOV('xxwatch', upos='NOUN') {'NN': ('xxwatch',), 'NNS': ('xxwatches',)}
Usage as a extension to spaCy
> import spacy > import lemminflect > nlp = spacy.load('en_core_web_sm') > doc = nlp('I am testing this example.') > doc[4]._.inflect('NNS') examples
Methods
getInflection
getInflection(lemma, tag, inflect_oov=True)
The method returns the inflection for the given lemma based on te PennTreebank tag. It first calls
getAllInflections and if none were found, calls
getAllInflectionsOOV. The flag allows the user to disable the rules based inflections. The return from the method is a tuple of different spellings for the inflection.
Arguments
- lemma: the word to inflect
- tag: the Penn-Treebank tag
- inflect_oov: if
Falsethe rules sytem will not be used.
getAllInflections
getAllInflections(lemma, upos=None)
This method does a dictionary lookup of the word and returns all lemmas. Optionally, the
upos tag may be used to limit the returned values to a specific part-of-speech. The return value is a dictionary where the key is the Penn Treebank tag and the value is a tuple of spellings for the inflection.
Arguments
- lemma: the word to inflect
- upos: Universal Dependencies part of speech tag the returned values are limited to
getAllInflectionsOOV
getAllInflectionsOOV(lemma, upos)
Similary to
getAllInflections, but uses the rules system to inflect words.
Arguments
- lemma: the word to inflect
- upos: Universal Dependencies part of speech tag the returned values are limited to
Spacy Extension
Token._.inflect(tag, form_num=0, inflect_oov=True, on_empty_ret_word=True)
The extension is setup in spaCy automatically when
lemminflect is imported. The above function defines the method added to
Token. Internally spaCy passes token information to a method in
Inflections which first lemmatizes the word. It then calls
getInflection and then returns the specified form number (ie.. the first spelling).
Arguments
- form_num: When multiple spellings exist, this determines which is returned
- inflect_oov: When
False, only the dictionary will be used, not the OOV/rules system
- on_empty_ret_word: If no result is found, return the original word
setUseInternalLemmatizer
setUseInternalLemmatizer(TF=True)
To inflect a word, it must first be lemmatized. To do this the spaCy extension calls the lemmatizer. Either the internal lemmatizer or spaCy's can be used. This function only impacts the behavior of the extension. No lemmatization is performed in the library methods.
Arguments
- TF: If
True, use the LemmInflect lemmatizer, otherwise use spaCy's | https://lemminflect.readthedocs.io/en/stable/inflections/ | 2020-09-18T22:24:46 | CC-MAIN-2020-40 | 1600400189264.5 | [] | lemminflect.readthedocs.io |
Create a cluster from a definition on Azure
You can quickly create clusters from default or custom cluster definitions within an existing Azure environment.
To create a Data Hub cluster on Azure, you must have an existing Azure environment. Also, you should make sure that the Runtime version of the Data Lake cluster matches the Runtime version of the Data Hub cluster that you are about to create; If these versions don't match, you may encounter warnings and/or errors.
-.
- Once done,. | https://docs.cloudera.com/data-hub/cloud/create-cluster-azure/topics/mc-create-cluster-from-template.html | 2020-09-19T00:08:27 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.cloudera.com |
cmake_find_package_multi¶
Warning
This is an experimental feature subject to breaking changes in future releases.
This generator is similar to the cmake_find_package generator but it allows working with
multi-configuration projects like
Visual Studio with both
Debug and
Release. But there are some differences:
- Only works with CMake > 3.0
- It doesn’t generate
FindXXX.cmakemodules but
XXXConfig.cmakefiles.
- The “global” approach is not supported, only “modern” CMake by using targets.
Usage¶
$ conan install . -g cmake_find_package_multi -s build_type=Debug $ conan install . -g cmake_find_package_multi -s build_type=Release
These commands will generate several files for each dependency in your graph, including a
XXXConfig.cmake that can be located
by the CMake find_package(XXX CONFIG) command, with XXX as the package name.
Important
Add the
CONFIG option to
find_package so that module mode is explicitly skipped by CMake. This helps to
solve issues when there is for example a
FindXXXX.cmake file in CMake’s default modules directory that could be loaded instead of the
XXXXConfig.cmake generated by Conan.
The name of the files follows the pattern
<package_name>Config.cmake. So for the
zlib/1.2.11 package,
a
zlibConfig.cmake file will be generated.
See also
Check the section cmake_find_package_multi to read more about this generator and the adjusted CMake variables/targets. | https://docs.conan.io/en/1.22/integrations/build_system/cmake/cmake_find_package_multi_generator.html | 2020-09-19T00:24:31 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.conan.io |
Cloud Files concepts
Cloud Files is not a file system in the traditional sense. You cannot map or mount virtual disk drives like you can with other forms of storage such as a SAN or NAS. Because Cloud Files is a different kind of storage system, you should review the following key terms and concepts.
Accounts
The Cloud Files system is designed to be used by many different customers. Your user account is your portion of the Cloud Files system. You must identify yourself with your Rackspace Cloud user name and API access key. After you are authenticated, you have full read/write access to the files stored under your account. To obtain a Cloud Files account and enable your API access key, sign up for a Rackspace Cloud account.
Authentication
The Identity Guide describes how to authenticate against the Identity service to receive Cloud Files connection parameters and an authentication token. The token must be passed to Cloud Files operations while it is valid.
For more information about authentication, see the Identity API Reference documentation.
Note
The language-specific APIs handle authentication, token passing, and HTTPS request/response communication.
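If you are calling the REST API directly rather than through an SDK, authentication looks roughly like the following minimal Python sketch. The endpoint URL and credential field names shown here are assumptions to be verified against the Identity API Reference documentation, and the account values are placeholders.

```python
import requests

# Placeholder credentials -- substitute your own Rackspace Cloud values.
IDENTITY_URL = "https://identity.api.rackspacecloud.com/v2.0/tokens"  # assumed endpoint
USERNAME = "my-cloud-username"
API_KEY = "my-api-access-key"

# Request a token using username/API key credentials.
payload = {
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": USERNAME,
            "apiKey": API_KEY,
        }
    }
}
response = requests.post(IDENTITY_URL, json=payload)
response.raise_for_status()
access = response.json()["access"]

# The token accompanies every Cloud Files request while it is valid; the
# service catalog lists the storage endpoints available to your account.
token = access["token"]["id"]
endpoints = [service["name"] for service in access["serviceCatalog"]]
print(token, endpoints)
```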
Permissions
In Cloud Files, you have your own storage account and full access to that account. You must authenticate with your credentials as described in the section on authentication. After you are authenticated, you can perform all Cloud Files operations within that account.
You can use Role Based Access Control (RBAC) with Cloud Files. For more information, see Role-based access control (RBAC).
Containers#
A container is a storage compartment that provides a way for you to organize your data. You can think of a container like a folder in Windows® or a directory in UNIX®. The primary difference between a container and these other file system concepts is that containers cannot be nested. You can have up to 500,000 containers in your account. Data must be stored in a container, so you must have at least one container defined in your account before you upload data.
If you expect to write more than 100 objects per second to a single container, we recommend organizing those objects across multiple containers to improve performance.
The only restrictions on container names is that they cannot contain a forward slash (/) and must be less than 256 bytes in length. Note that the length restriction applies to the name after it has been URL-encoded. For example, a container name of Course Docs would be URL-encoded as Course%20Docs and is therefore 13 bytes in length rather than the expected 11.
You can create a container in any Rackspace data center. (See Service access endpoints for a list.) However, in order to lower your costs, you should create your most served containers in the same data center as your server. Otherwise, you will be billed for external bandwidth charges. Note that this is true when computations are performed on objects but is not true for static content served to end users directly.
In addition to containing objects, you can also use the container to control access to objects by using an access control list (ACL). For more information, see Container access control lists. You cannot store an ACL with individual objects.
Objects#
Objects are the basic storage entities in Cloud Files. They represent
the files and their optional metadata that you upload to the system.
When you upload objects to Cloud Files, the data is stored as-is
(without compression or encryption) and consists of a location
(container), the object's name, and any metadata that you assign,
consisting of key/value pairs. For example, you can choose to store a
backup of your digital photos and organize them into albums. In this
case, each object could be tagged with metadata such as
Album : Caribbean Cruise or
Album : Aspen Ski Trip.
The only restriction on object names is that they must be less than 1024
bytes in length after URL-encoding. For example, an object name of
C++final(v2).txt would be URL-encoded as
C%2B%2Bfinal%28v2%29.txt and therefore is 24 bytes in length rather
than the expected 16.
Cloud Files limits the size of a single uploaded object. By default this limit is 5 GB. However, the download size of a single object is virtually unlimited with the use of segmentation. Segments of the larger object are uploaded and a special manifest file is created that, when downloaded, sends all the segments concatenated as a single object. Segmentation also offers much greater upload speed with the possibility of parallel uploads of the segments.
For metadata, do not exceed 90 individual key/value pairs for any one object and do not exceed 4 KB (4096 bytes) for the total byte length of all key/value pairs.
Operations#
Operations are the actions you perform within your account, such as creating or deleting containers or uploading or downloading objects. For information about the Cloud Files API operations, see Storage API reference and CDN API reference.
You can perform operations through the REST web service API or a language-specific API. For information about the Rackspace language-specific APIs, see SDKs and tools.
Note
All operations must include a valid authorization token.
CDN-enabled containers#
CDN-enabled containers serve content through the Akamai content delivery network (CDN). CDN-enabled containers are publicly accessible for read access, so they do not require an authorization token for read access. However, uploading content into a CDN-enabled container is a secure operation and requires a valid authentication token.
Each CDN-enabled container has a unique URI that can be combined with its object names and openly distributed in web pages, emails, or other applications.
For example, a CDN-enabled container named
photos might be
referenced as.
If that container houses ashot called
wow1.jpg, that image
can be served by a CDN with the full URL of.
This URI can be embedded in items like HTML pages, email messages, or
blog posts. The first time that the URI is accessed, a copy of that
image is fetched from the Cloud Files storage system. The copy is cached
in a CDN and served from there for all subsequent requests for a
configurable cache time to live (TTL) value. Setting the TTL of a
CDN-enabled container translates to setting the
Expires and
Cache-Control HTTP headers. Note that extremely long TTL values do
not guarantee that an object is served from a CDN edge location. When
the TTL expires, the CDN checks Cloud Files to ensure that it has the
most up-to-date content. A purge request forces the CDN to check with
Cloud Files for the most up-to-date version of the file.
Cloud Files continues to serve content through the CDN until it receives a delete request.
Containers tracked in the CDN management service are completely separate and distinct from the containers defined in the storage service. It is possible for a container to be CDN-enabled even if it does not exist in the storage system. You might want the ability to pre-generate CDN URLs before actually uploading content, and this separation gives you that ability.
However, for the content to be served from the CDN, the container names
must match in both the CDN management service and the storage
service. For example, you could CDN-enable a container called
images
and be assigned the CDN URI, but you also need to create a container
called
images in the storage service.
For more information about CDN-enabled containers and operations for them, see the CDN API reference. | https://docs.rackspace.com/docs/cloud-files/v1/getting-started/concepts/ | 2020-09-19T00:38:24 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.rackspace.com |
Create a property in Explore with the query palette
The query palette is a quick way to create new properties and flows in your queries. You can create an object inline while you are constructing a query, rather than navigating away to create the property in the Data section of the UI.
Access the palette at the top of the query builder.
Tips for using the query palette:
To clear the query palette, delete objects by clicking on the delete icon (when hovering). The palette is also cleared when you start a new session, navigate away from the query builder, or refresh your page. To find past queries, use History (at the bottom left of the UI) or hit your browser Back button.
Objects in a query's palette are linked to the query and saved only as part of the query. To revisit or reuse your objects, pin your query to a board or use the query ID (that is, the query URL).
- In Explore, a green dot next to the button that opens the query palette marks a query that uses properties or flows from a palette.
- A red dot in the query palette marks a property or flow that is not valid. | https://docs.interana.com/User's_Guide/Enrich_your_data_with_properties/Create_a_property_in_Explore_with_the_query_palette | 2020-09-19T00:07:27 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['https://docs.interana.com/@api/deki/files/3582/locate_query_palette.png?revision=1&size=bestfit&width=429&height=228',
'locate_query_palette.png'], dtype=object) ] | docs.interana.com |
The responsibilities of Services, Managers, and Drivers, can be a bit confusing to people that are new to manila..
manila.serviceModule¶
Generic Node base class for all workers that run on hosts.
Service(host, binary, topic, manager, report_interval=None, periodic_interval=None, periodic_fuzzy_delay=None, service_name=None, coordination=False, *args, **kwargs)
Bases:
oslo_service.service.Service
Service object for binaries running on hosts.
A service takes a manager and enables rpc by listening to queues based on topic. It also periodically runs tasks on the manager and reports it state to the database services table.
create(host=None, binary=None, topic=None, manager=None, report_interval=None, periodic_interval=None, periodic_fuzzy_delay=None, service_name=None, coordination=False)
Instantiates class and passes back application object.
kill()
Destroy the service object in the datastore.
periodic_tasks(raise_on_error=False)
Tasks to be run at a periodic interval.
report_state()
Update the state of this service in the datastore.
start()
stop()
wait()
WSGIService(name, loader=None)
Bases:
oslo_service.service.ServiceBase
Provides ability to launch API from a ‘paste’ configuration.
reset()
Reset server greenpool size to default.
start()
Start serving this service using loaded configuration.
Also, retrieve updated port number in case ‘0’ was passed in, which indicates a random port should be used.
stop()
Stop serving this API.
wait()
Wait for the service to stop serving this API.
process_launcher()
serve(server, workers=None)
wait()
manila)
Bases:
manila.db.base.Base,
manila.manager.PeriodicTasks
RPC_API_VERSION
Redefine this in child classes.
init_host()
Handle initialization if this is a standalone service.
Child classes should override this method.
periodic_tasks(context, raise_on_error=False)
Tasks to be run at a periodic interval.
service_config(context)
service_version(context)
target
This property is used by oslo_messaging.
PeriodicTasks
Bases:
oslo_service.periodic_task.PeriodicTasks
SchedulerDependentManager(host=None, db_driver=None, service_name='undefined')
Bases:
manila.manager.Manager
Periodically send capability updates to the Scheduler services.
Services that need to update the Scheduler of their capabilities should derive from this class. Otherwise they can derive from manager.Manager directly. Updates are only sent after update_service_capabilities is called with non-None values.
update_service_capabilities(capabilities)
Remember these capabilities to send on next periodic update.
A manager will generally load a driver for some of its tasks. The driver is responsible for specific implementation details. Anything running shell commands on a host, or dealing with other non-python code should probably be happening in a driver.
Drivers should minimize touching the database, although it is currently acceptable for implementation specific data. This may be reconsidered at some point.. | https://docs.openstack.org/manila/queens/contributor/services.html | 2020-09-19T00:32:12 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.openstack.org |
get_pkg_data_filename¶
astropy.utils.data.
get_pkg_data_filename(data_name, package=None, show_progress=True, remote_timeout=None)[source]¶
Retrieves a data file from the standard locations for the package and provides a local filename for the data.
This function is similar to
get_pkg_data_fileobjbut returns the file name instead of a readable file-like object. This means that this function must always cache remote files locally, unlike
get_pkg_data_fileobj.
- Parameters
- data_namestr
Name/location of the desired data file. One of the following:
The name of a data file included in the source distribution. The path is relative to the module calling this function. For example, if calling from
astropy.pkname, use
'data/file.dat'to get the file in
astropy/pkgname/data/file.dat. Double-dots can be used to go up a level. In the same example, use
'../data/file.dat'to get
astropy/data/file.dat.
If a matching local file does not exist, the Astropy data server will be queried for the file.
A hash like that produced by
compute_hashcan be requested, prefixed by ‘hash/’ e.g. ‘hash/34c33b3eb0d56eb9462003af249eff28’. The hash will first be searched for locally, and if not found, the Astropy data server will be queried.
- packagestr, optional
If specified, look for a file relative to the given package, rather than the default of looking relative to the calling module’s package.
- show_progressbool, optional
Whether to display a progress bar if the file is downloaded from a remote server. Default is
True.
- remote_timeoutfloat
Timeout for the requests in seconds (default is the configurable
astropy.utils.data.Conf.remote_timeout, which is 3s by default). Set this to zero to prevent any attempt at downloading.
- Returns
- filenamestr
A file path on the local file system corresponding to the data requested in
data_name.
- Raises
- urllib.error.URLError
If a remote file cannot be found.
- OSError
If problems occur writing or reading a local file.
See also
get_pkg_data_contents
returns the contents of a file or url as a bytes object
get_pkg_data_fileobj
returns a file-like object with the data
Examples
This will retrieve the contents of the data file for the
astropy.wcstests:
>>> from astropy.utils.data import get_pkg_data_filename >>> fn = get_pkg_data_filename('data/3d_cd.hdr', ... package='astropy.wcs.tests') >>> with open(fn) as f: ... fcontents = f.read() ...
This retrieves a data file by hash either locally or from the astropy data server:
>>> from astropy.utils.data import get_pkg_data_filename >>> fn = get_pkg_data_filename('hash/34c33b3eb0d56eb9462003af249eff28') >>> with open(fn) as f: ... fcontents = f.read() ... | https://docs.astropy.org/en/latest/api/astropy.utils.data.get_pkg_data_filename.html | 2020-09-18T23:49:04 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.astropy.org |
Migrating Kudu Data to CDP
Learn about how to migrate Kudu data from CDH to CDP.
- Back up all your data in Kudu using the kudu-backup-tools.jar Kudu backup tool.
- Manually apply any custom Kudu configuration in your new cluster that you had in your old cluster.
- Copy your backed up data to the target CDP cluster.
- Restore your backup up Kudu data using the Kudu backup tool. | https://docs.cloudera.com/cdp/latest/data-migration/topics/cdp-data-migration-kudu.html | 2020-09-18T23:57:19 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.cloudera.com |
In this tutorial I'll show you, how to set up the
?mute command, that it works correctly.
It's really easy, so lets go. :D
First, you have to create a new role, or choose a role that already exists. The role shouldn't have the following permission:
Also, the position of the "Mute" role should be above the muteable roles, to make shure that the "Mute" role can overwrite the text-permissions of the roles, which are below the "Mute" role.
After that, define the "Mute" role for LenoxBot with
?muterole rolename
That's it. Now you can use
?mute @User time{d, h, m, s} {reason}
And now: Take away the troublemakers. :) | https://docs.lenoxbot.com/tutorials/how-to-set-up-the-mute-command | 2020-09-18T22:41:10 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.lenoxbot.com |
Azure fundamental concepts
Learn fundamental concepts and terms that are used in Azure, and how the concepts relate to one another.
Azure terminology
It's helpful to know the following definitions as you begin your Azure cloud adoption efforts:
- Resource: An entity that's managed by Azure. Examples include Azure Virtual Machines, virtual networks, and storage accounts.
- Subscription: A logical container for your resources. Each Azure resource is associated with only one subscription. Creating a subscription is the first step in adopting Azure.
- Azure account: The email address that you provide when you create an Azure subscription is the Azure account for the subscription. The party that's associated with the email account is responsible for the monthly costs that are incurred by the resources in the subscription. When you create an Azure account, you provide contact information and billing details, like a credit card. You can use the same Azure account (email address) for multiple subscriptions. Each subscription is associated with only one Azure account.
- Account administrator: The party associated with the email address that's used to create an Azure subscription. The account administrator is responsible for paying for all costs that are incurred by the subscription's resources.
- Azure Active Directory (Azure AD): The Microsoft cloud-based identity and access management service. Azure AD allows your employees to sign in and access resources.
- Azure AD tenant: A dedicated and trusted instance of Azure AD. An Azure AD tenant is automatically created when your organization first signs up for a Microsoft cloud service subscription like Microsoft Azure, Intune, or Microsoft 365. An Azure tenant represents a single organization.
- Azure AD directory: Each Azure AD tenant has a single, dedicated, and trusted directory. The directory includes the tenant's users, groups, and apps. The directory is used to perform identity and access management functions for tenant resources. A directory can be associated with multiple subscriptions, but each subscription is associated with only one directory.
- Resource groups: Logical containers that you use to group related resources in a subscription. Each resource can exist in only one resource group. Resource groups allow for more granular grouping within a subscription, and are commonly used to represent a collection of assets required to support a workload, application, or specific function within a subscription.
- Management groups: Logical containers that you use for one or more subscriptions. You can define a hierarchy of management groups, subscriptions, resource groups, and resources to efficiently manage access, policies, and compliance through inheritance.
- Region: A set of Azure datacenters that are deployed inside a latency-defined perimeter. The datacenters are connected through a dedicated, regional, low-latency network. Most Azure resources run in a specific Azure region.
Azure subscription purposes
An Azure subscription serves several purposes. An Azure subscription is:
- A legal agreement. Each subscription is associated with an Azure offer, such as a free trial or pay-as-you-go. Each offer has a specific rate plan, benefits, and associated terms and conditions. You choose an Azure offer when you create a subscription.
- A payment agreement. When you create a subscription, you provide payment information for that subscription, such as a credit card number. Each month, the costs incurred by the resources deployed to that subscription are calculated and billed via that payment method.
- A boundary of scale. Scale limits are defined for a subscription. The subscription's resources can't exceed the set scale limits. For example, there's a limit on the number of virtual machines that you can create in a single subscription.
- An administrative boundary. A subscription can act as a boundary for administration, security, and policy. Azure also provides other mechanisms to meet these needs, such as management groups, resource groups, and role-based access control.
Azure subscription considerations
When you create an Azure subscription, you make several key choices about the subscription:
- Who is responsible for paying for the subscription? The party associated with the email address that you provide when you create a subscription by default is the subscription's account administrator. The party associated with this email address is responsible for paying for all costs that are incurred by the subscription's resources.
- Which Azure offer am I interested in? Each subscription is associated with a specific Azure offer. You can choose the Azure offer that best meets your needs. For example, if you intend to use a subscription to run nonproduction workloads, you might choose the Pay-As-You-Go Dev/Test offer or the Enterprise Dev/Test offer.
Note
When you sign up for Azure, you might see the phrase create an Azure account. You create an Azure account when you create an Azure subscription and associate the subscription with an email account.
Azure administrative roles
Azure defines three types of roles for administering subscriptions, identities, and resources:
- Classic subscription administrator roles
- Azure role-based access control (RBAC) roles
- Azure Active Directory (Azure AD) administrator roles
The account administrator role for an Azure subscription is assigned to the email account that's used to create the Azure subscription. The account administrator is the billing owner of the subscription. The account administrator can manage subscription administrators via the Azure portal.
By default, the service administrator role for a subscription also is assigned to the email account that's used to create the Azure subscription. The service administrator has permissions to the subscription equivalent to the RBAC-based Owner role. The service administrator also has full access to the Azure portal. The account administrator can change the service administrator to a different email account.
When you create an Azure subscription, you can associate it with an existing Azure AD tenant. Otherwise, a new Azure AD tenant with an associated directory is created. The role of global administrator in the Azure AD directory is assigned to the email account that's used to create the Azure AD subscription.
An email account can be associated with multiple Azure subscriptions. The account administrator can transfer a subscription to another account.
For a detailed description of the roles defined in Azure, see Classic subscription administrator roles, Azure RBAC roles, and Azure AD administrator roles.
Subscriptions and regions
Every Azure resource is logically associated with only one subscription. When you create a resource, you choose which Azure subscription to deploy that resource to. You can move a resource to another subscription later.
While a subscription isn't tied to a specific Azure region, each Azure resource is deployed to only one region. You can have resources in multiple regions that are associated with the same subscription.
Note
Most Azure resources are deployed to a specific region. Certain resource types are considered global resources, such as policies that you set by using the Azure Policy services.
Related resources
The following resources provide detailed information about the concepts discussed in this article:
- How does Azure work?
- Resource access management in Azure
- Azure Resource Manager overview
- Role-based access control (RBAC) for Azure resources
- What is Azure Active Directory?
- Associate or add an Azure subscription to your Azure Active Directory tenant
- Topologies for Azure AD Connect
- Subscriptions, licenses, accounts, and tenants for Microsoft's cloud offerings
Next steps
Now that you understand fundamental Azure concepts, learn how to scale with multiple Azure subscriptions. | https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts | 2020-09-19T00:54:03 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.microsoft.com |
How to fix the WordPress error "Destination Folder Already Exists" ?
The “ destination folder already exists” error is one of the most common problem when administrating a WordPress website. In most of the cases, it happens because the plugin or the theme that you upload is already installed.
It occurs during a theme or plugin installation, and is due to WordPress extracting the plugin or theme’s zip file to a folder with the same name as the file you want to install.
How to fix ?
- Try to deactivate, remove, and re-upload the theme or plugin from your WordPress administration
- If the first step did not work, connect to your server with cPanel or any FTP client, and check if the theme or plugin is already installed. If this is the case, a folder with the theme or plugin name already exists on your server, which will cause the error. Delete the folder. Reinstall the theme or the plugin, either via FTP or cPanel or from your WordPress administration.
Other resources about this issue : | https://docs.presscustomizr.com/article/354-how-to-fix-the-wordpress-error-destination-folder-already-exists | 2020-09-18T22:53:47 | CC-MAIN-2020-40 | 1600400189264.5 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5bf7c34104286304a71c8f94/file-jasa3r4NnV.png',
'Destination Folder Already Exists. WordPress error'], dtype=object) ] | docs.presscustomizr.com |
Experimental:Web Server DAT
Summary[edit]
Web Server DAT runs a web server in TouchDesigner. The Web Server DAT supports handling of HTTP requests and WebSocket connections. How client requests are handled is left up to the user via callbacks.
Currently only Basic authentication (ie. encoded username and password) is supported via a python method. Authentication will be in the HTTP request dictionary under the key 'Authorization'.
Ultimately, security is the complete responsibility of the user. It is up to the user to ensure that HTTP requests are properly authenticated, and any data storing usernames/passwords are encrypted or saved privately..
Callbacks DAT
callbacks - A reference to a DAT with python callbacks. The Web Server DAT relies heavily on the Callbacks DAT, and in fact most functionality passes through the callbacks.
onHTTPRequest - Triggered when the web server receives an HTTP request. The request is a dictionary of HTTP headers. Similarly, response is a dictionary of response data such as status and reason. The response server must be returned from the callback. This response will be sent back to the client.
onWebSocketOpen - Triggered when a WebSocket connection is opened with a client. The client address is passed to the callback.
onWebSocketReceiveText - Triggered when the server's WebSocket connection receives text data from a client. The client that sent the text data is passed through to the callback.
onWebSocketReceiveBinary - Triggered when the server's WebSocket connection receives binary data from a client. The client that sent the binary data is passed through to the callback.
onServerStart - Triggered when the server starts.
onServerStop - Triggered when the server stops.. | https://docs.derivative.ca/index.php?title=Experimental:Web_Server_DAT&oldid=16719 | 2019-10-14T04:52:31 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.derivative.ca |
Add/Remove/Modify users in team spaces
Login as either the space admin or owner, and select the
invite button.
To add users, Fill in the account
role, and click
invite
To modify an existing invitation or member
role, you can select the respective dropdown options.
To remove an existing invitation or member, click on the respective
x button
And confirm the action
| https://docs.uilicious.com/administration/space-administration.html | 2019-10-14T04:30:42 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['../images/adminstration/invite-button.png', 'invite button'],
dtype=object)
array(['../images/adminstration/invite-via-email.png', 'invite form'],
dtype=object)
array(['../images/adminstration/space-edit-role.png', 'role edit'],
dtype=object)
array(['../images/adminstration/space-remove-user.png', 'user removal'],
dtype=object)
array(['../images/adminstration/space-remove-confirm.png',
'user removal confirmation'], dtype=object) ] | docs.uilicious.com |
Follow this procedure if you're currently using SVN MultiSite 4.0. If you are using a different version of SVN MultiSite you should review this instructions in the chapter 5. Upgrading Subversion MultiSite.
There have been changes made in recent versions of SVN MultiSite that prevent older builds from being able to upgrade. Before you continue, check what version of SVN MultiSite you are running:
You're running a build that is up-to-date enough to be able to upgrade directly to version 4.2. Proceed to Upgrade to 4.2.
You're running a build that can't upgrade directly to 4.2. Before you can continue you need to contact WANdisco and get SVN MultiSite 4.0 Build: 3937.
Use the upgrader script that is bundled with MultiSite to complete an upgrade to this up-to-date version:
Follow the procedure Upgrading 4.0 then return to this page. unavailable to users so either complete the upgrade out of development hours and ensure that your developers are aware of the brief outage.
Installing in order to perform an upgrade using SVN MultiSite's backup data?
Don't enable Authz during the installation - wait until the import is completed, then enable Authz from the Subversion Settings screen.
Enabling Authz during the installation can greatly impact the performance of the backup file import. backup. build xxx Rename the
svn-replicator directory to something like
backup-svn-replicator etc. e.g.
mv svn-replicator backup-svn-replicator
This approach lets you quickly restore your working version should anything go wrong with the update - Just restore the name of your backed-up installation and restart.
SVN password file
Authz file
license.keyfile ready
5.1 Decompress the new Subversion MultiSite installation file.
5.3 Copy your license key file from your backup to the new
svn-replicator/config directory.
5.4 Go to
/svn-replicator/bin/ and start the installation with the following command:
perl setup
Respond to the Yes / No prompt about accepting the default Java heap settings.
[root@localhost
5.5 The setup will now start up the browser-based installer. Open your browser and go to address shown at the end of the setup - that's your server's IP and specify the admin console port (6444 by default).
Change the environment variable WD_JVMARGS if you wish to start java differently WARNING: if the host does not meet these specified memory requirements, you will encounter problems starting the WANdisco processes. Continue, Y or N ? [Y] : y May 20, 2013 10:57:37 AM org.nirala.trace.Logger info INFO: Invoked from WANdisco installation at: /home/wandisco/svn-replicator [I] using specified port: 6444 [I] Starting Subversion web installer Point a web browser to to configure the product.
5.6. From the Welcome screen, click Continue.
5.7. Once you've read the WANdisco End User License Agreement, click I Agree to continue with the installation.
5.8. Enter a password for the default Admin Console account, the username for this account is admin. Click Next to continue.
5.9. The next two screens explain how MultiSite will act as a proxy between Subversion clients and the Subversion server. Click Next to continue.
5.10 By default, MultiSite will listen on port 80, while Subversion will listen on port 8080. The benefit of this setup is that Subversion end-user don't need to make any changes.
5.11
You'll confirm the proxy settings. These will be populated with the default settings noted in the previous screens. place. See more about SSL Settings.
Once you've finished making any changes to the proxy settings, click Next to continue.
5.12
Setup will check the Apache config file for any.
5.13 admin console.
The import file that will recover your users and settings will not add your old LDAP authority settings. You must re-enter your LDAP authorities before completing the import.
Follow the instructions in the 3.7 Create LDAP Authority 4 into the utils directory (
<INSTALL-DIR>/svn-replicator/utils/) which contains three conversions scripts. You will need to apply the script that corresponds with the version of SVN MultiSite from which you are upgrading. i.e.
perl convertac40-42.pl backup.xml localOR
perl convertac40-42.pl backup.xml ldap
Running the conversion script will create a new file in the utils folder.
So you backed up your data, installed the latest version MultiSite, if it was neccessary, update-backup.xml or MS425-convert-access-control.xml file, (if you needed to update your import file's format). However, any backup that you have installed will only provide user / group definitions. The following settings need to be manually applied:
8.1 Once the latest of SVN MultiSite has been installed and your data re-imported you should redeploy the other nodes using SVN MultiSite's packager where the installation of the other nodes can be handled from the admin console.
This product is protected by copyright and distributed under licenses restricting copying, distribution and decompilation. | https://docs.wandisco.com/svn/ms/v4.2/upgrade40.html | 2019-10-14T03:00:30 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.wandisco.com |
Introduction¶
SCAPI is an open-source general library tailored for Secure Computation implementations. SCAPI provides a flexible and efficient infrastructure for the implementation of secure computation protocols, that is both easy to use and robust. We hope that SCAPI will help to promote the goal of making secure computation practical.
Why Should I Use SCAPI?¶
- SCAPI provides uniformity. As of today, different research groups are using different implementions. It is hard to compare different results, and implementations carried out by one group cannot be used by others. SCAPI is trying to solve this problem by offering a modular codebase to be used as the standard library for Secure Computation.
- SCAPI is flexible. SCAPI’s lower-level primitives inherit from modular interfaces, so that primitives can be replaced easily. SCAPI leaves the choice of which concrete primitives to actually use to the high-level application calling the protocol. This flexibility can be used to find the most efficient primitives for each specific problem.
- SCAPI is efficient. Most of SCAPI’s low level code is built upon native C/C++ libraries using JNI (the java native interface) in order to run more efficiently. For example, elliptic curve operations in SCAPI are implemented using the extremely efficient Miracl library written in C.
- SCAPI is built to please. SCAPI has been written with the understanding that others will be using it, and so an emphasis has been placed on clean design and coding, documentation, and so on.
Architecture¶
SCAPI is composed of the following three layers:
- Low-level primitives: these are functions that are basic building blocks for cryptographic constructions (e.g., pseudorandom functions, pseudorandom generators, discrete logarithm groups, and hash functions belong to this layer).
- Non-interactive mid-level protocols: these are non-interactive functions that can be applications within themselves in addition to being tools (e.g., encryption and signature schemes belong to this layer).
- Interactive mid-level protocols: these are interactive protocols involving two or more parties; typically, the protocols in this layer are popular building blocks like commitments, zero knowledge and oblivious transfer.
In addition to these three main layers, there is an orthogonal communication layer that is used for setting up communication channels and sending messages.
| https://scapi.readthedocs.io/en/latest/intro.html | 2019-10-14T03:23:43 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['_images/architecture.png', 'SCAPI Architecture'], dtype=object)] | scapi.readthedocs.io |
This video explains the process of installing WPDS plugin into your WP site. WPDS is a wordpress plugin that simplifies the process of searching any domain on the web.
Installation
Following are the steps for installing WPDS:
- In your WP site, go to Plugins and click Add New.
- Click Upload Plugin and select the WPDS plugin file located in any of your directory and click on Install Now
- Click Activate Plugin to activate the WPDS plugin. Thats it, your WPDS plugin is ready to use.
| http://docs.whmpress.com/docs/wpds-wordpress-domain-search/getting-started/wpds-installation/ | 2019-10-14T04:47:58 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['http://docs.whmpress.com/wp-content/uploads/2016/03/add_plugin.jpg',
None], dtype=object)
array(['http://docs.whmpress.com/wp-content/uploads/2016/03/upload_plugin.jpg',
None], dtype=object)
array(['http://docs.whmpress.com/wp-content/uploads/2016/03/install_now.jpg',
'Selecting WPDS Plugin file'], dtype=object)
array(['http://docs.whmpress.com/wp-content/uploads/2016/03/activate_plugin.jpg',
None], dtype=object) ] | docs.whmpress.com |
VF-C Delivery¶
VF-C includes the following components in R1.
VF-C includes several components in ONAP R1.
- Catalog is used to store the package distributed by SDC, it is a runtime catalog.
- Workflow include two micro service, one is workflow manage service and the other is workflow-activiti engine service, this two service will onboard workflow to workflow engine and parse workflow.
- For NS lifecycle manager,it mainly completes the NS lifecycle management,such as NS Instantiation/termination and auto-healing.
- For Resource manager, it will communicate with NS lifecycle manager to update instance data to A&AI.
- In VF-C southbound, it includes Gvnfmdriver and SVNFM driver which will interact with GVNFM and Vendor VNFM respectively to execute VNF lifecycle management,VF-C provides vnfm driver northbound api,then Vendor can implement this APIs to integrate with VF-C.
- For the EMS driver,it can collect VNF lay’s Fcaps data from Vendor EMS, and then translate and put these data to DCAE Vescollector
For the Amsterdam release, VF-C includes the following components:
- NFVO
- vfc-nfvo-lcm
- vfc-nfvo-catalog
- vfc-nfvo-resmgr
- vfc-nfvo-driver-emsdriver
- vfc-nfvo-driver-gvnfm-gvnfmadapter
- vfc-nfvo-driver-gvnfm-jujudriver
- vfc-nfvo-driver-svnfm-ztedriver
- vfc-nfvo-driver-svnfm-huaweidriver
- vfc-nfvo-driver-svnfm-nokiadriver
- vfc-nfvo-driver-sfc-ztesfcdriver
- GVNFM
- vfc-gvnfm-vnflcm
- vfc-gvnfm-vnfmgr
- vfc-gvnfm-vnfres
- Workflow
- workflow-engine-mgr-service
- activiti-extension
VF-C support VolTE use case in R1 and R2, following are the vVoLTE releated Workflow in VF-C.
- VoLTE Use Case Instantiation In VF-C
- VoLTE Use Case Termination in VF-C
- VoLTE Use Case Auto-healing in VF-C | https://docs.onap.org/en/dublin/submodules/vfc/nfvo/lcm.git/docs/platform/delivery.html | 2019-10-14T04:09:45 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.onap.org |
The component
NServiceBus. enables sending monitoring data gathered with
NServiceBus. to an instance:
busConfiguration logically unique and human readable.
A human readable value can be passed in the following example:
const string SERVICE_CONTROL_METRICS_ADDRESS = "particular.monitoring"; var endpointName = "MyEndpoint"; var machineName = $"{Dns.GetHostName()}.{IPGlobalProperties.GetIPGlobalProperties().DomainName}"; var instanceIdentifier = $"{endpointName}@{machineName}"; busConfiguration.SendMetricDataToServiceControl( serviceControlMetricsAddress: SERVICE_CONTROL_METRICS_ADDRESS, instanceId: instanceIdentifier);
InstanceIddoes not require. | https://docs.particular.net/monitoring/metrics/install-plugin?version=metricsservicecontrol_1.2 | 2019-10-14T04:20:44 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.particular.net |
1. Welcome
1.1. Product overview
The Fusion Plugin for Databricks Delta Lake is used with WANdisco Fusion to provide continuous replication from on-premises Hadoop analytics to Spark based cloud analytics with zero downtime and zero data loss. Take advantage of this together with the Fusion Plugin for Live Hive to implement a LiveAnalytics solution.
Use LiveAnalytics to migrate at scale from continuously operating on-premises Hadoop environments to Databricks, or to operate a hybrid analytics solution. As changes occur on-premises, the Fusion Plugin for Databricks Delta Lake keeps your cloud data and metadata consistent with those changes, automating the critical data replication needed for you to adopt a modern, cloud-based analytics platform.
Create Hive tables in Hadoop to make replicas of those tables available in Databricks. Ingest data to Hive tables and access the same information as Delta Lake content in a Databricks environment. Modify Hive schemas and see the same structure reflected in changes to matching Delta Lake tables.
The Fusion Plugin for Databricks Delta Lake works in conjunction with WANdisco Fusion and the Fusion Plugin for Live Hive to deliver WANdisco’s LiveAnalytics solution. LiveAnalytics lets you:
Automate analytics data migration without disrupting your on-premises applications, and without losing data. Replication is continuous, controlled by rules that you define.
Ensure the reliability and consistency of your data in the cloud by supporting unified analytics that spans your on-premises and cloud environments, and
Replicate changes immediately from Hive content and metadata on-premises to equivalent changes in your cloud environment.
1.2. Documentation guide
This guide explains how to install, configure and operate the Fusion Plugin for Databricks Delta Lake, and contains the following sections:
The welcome chapter introduces this user guide and provides help with how to use it.
- Release Notes
Details the latest software release, covering new features, fixes and known issues to be aware of.
- Concepts
Explains how the Plugin for Databricks Delta Lake uses WANdisco’s LiveData platform, and how to use it with Hadoop, Hive, cloud storage systems and Databricks.
- Installation
Covers the steps required to install and set up the Plugin for Databricks Delta Lake into a WANdisco Fusion deployment.
- Operation
How to operate, configure and troubleshoot Plugin for Databricks Delta Lake.
- Reference
Additional Fusion Plugin for Databricks Delta Lake documentation, including documentation for the available REST API.
1.3. How to contact WANdisco
See our online Knowledge base which contains updates and more information.
If you need more help raise a case on our support website.
If you find an error in this documentation or if you think some information needs improving, raise a case on our support website or email [email protected].
2. Release Notes
2.1. WANdisco Fusion 4.0.1 Build 1565
The WANdisco Fusion Plugin for Databricks Delta Lake completes the product offerings in WANdisco’s LiveAnalytics solution for migration of on-premises Hadoop analytic datasets to the cloud.
Use the Fusion Plugin for Databricks Delta Lake with WANdisco Fusion for continuous replication from on-premises Hadoop analytics to Spark-based cloud analytics with zero downtime and zero data loss. Apply it with the Fusion Plugin for Live Hive and LiveMigrator to implement a LiveAnalytics solution.
For the release notes and information on known issues, please visit the Knowledge base - Fusion Plugin for Databricks Delta Lake 4.0.1 Build 1565.
3. Concepts
3.1. Use Cases
Implement a LiveAnalytics solution using the Fusion Plugin for Databricks Delta Lake. Typical use cases for LiveAnalytics include:
- Hadoop and Hive migration to Databricks
Bring your on-premises Apache Hive content to Databricks as Delta Lake tables, so that you can take advantage of the unique capabilities of a unified analytics platform without disrupting business operations.
LiveAnalytics’ continuous, consistent, automated data replication ensures migrated data sets are available for analytical processing immediately. With minimal disruption when migrating between Hadoop and non Hadoop environments, LiveAnalytics provides faster adoption of Machine Learning and AI capabilities in the cloud. LiveAnalytics keeps your data accurate and consistent across all your business application environments, regardless of geographic location, data platform architecture, or cloud storage provider.
- Hybrid Analytics
Take advantage of the unique capabilities that span your on-premises and cloud environments without compromising on data accuracy or availability. Make your on-premises Hive data available for processing at local speed in Databricks, even while it continues to undergo change.
3.2. Related Technology
Familiarize yourself here with concepts for the product and the environment in which it works. Understand how to use the Fusion Plugin for Databricks Delta Lake by learning more about the following items.
- Apache Hive
Hive is a data warehousing technology for Apache Hadoop. It supports applications that want to use data residing in a Hadoop cluster in a structured manner, allowing ad-hoc querying, summarization and other data analysis tasks to be performed using high-level constructs, including Apache Hive SQL queries.
- Databricks
The Databricks Unified Analytics platform accelerates data innovation by unifying data science, engineering and business. It is a cloud-based service available in AWS or Azure, providing a workspace through which users interact with data objects, computational resources, notebooks, libraries, and experiments.
- Hive Metadata
The operation of Hive depends on the definition of metadata that describes the structure of data residing in a Hadoop cluster. Hive organizes its metadata with structure also, including definitions of Databases, Tables, Partitions, and Buckets.
- Delta Lake
Delta Lake is an open source storage layer used by Databricks that provides ACID transactions, scalable metadata handling, and unification of batch and stream data processing.
- Apache Hive Type System
Hive defines primitive and complex data types that can be assigned to data as part of the Hive metadata definitions. These are primitive types such as TINYINT, BIGINT, BOOLEAN, STRING, VARCHAR, TIMESTAMP, etc. and complex types like Structs, Maps, and Arrays.
- Apache Hive Metastore
The Apache Hive Metastore is a stateless service in a Hadoop cluster that presents an interface for applications to access Hive metadata. It can be deployed in a variety of configurations to suit different requirements. In every case, it provides a common interface for applications to use Hive metadata.
The Hive Metastore is usually deployed as a standalone service, exposing an Apache Thrift interface by which client applications interact with it to create, modify, use and delete Hive metadata in the form of databases, tables, etc. It can also be run in embedded mode, where the Metastore implementation is co-located with the application making use of it.
- WANdisco Fusion Live Hive Proxy
The Live Hive Proxy is a WANdisco service that is part of the Fusion Plugin for Live Hive, acting as a proxy for applications that use a standalone Hive Metastore. The service coordinates actions performed against the Metastore with actions in environments (including Databricks Delta Lake) to which associated Hive metadata are replicated.
- Hive Client Applications
Client applications that use Apache Hive interact with the Hive Metastore, either directly (using its Thrift interface), or indirectly via another client application such as Beeline or HiveServer2.
- HiveServer2
HiveServer2 is a service that exposes a JDBC interface for applications that want to use it for accessing Hive. This could include standard analytic tools and visualization technologies, or the Hive-specific CLI called Beeline.
Hive applications determine how to contact the Hive Metastore using the Hadoop configuration property
hive.metastore.uris.
- Hive pattern rules
A simple syntax used by Hive for matching database objects.
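For illustration only (the database and table names below are hypothetical), these patterns use the same matching syntax that Hive accepts in its SHOW commands, where * matches any sequence of characters and | separates alternatives:

show databases like 'sales*';
show tables in sales_eu like 'orders|tx_*';

A pattern such as sales* would match databases named sales, sales_eu and sales_archive, which is how selective replication rules identify the objects they cover.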
3.3. Deployment Architecture
Use the Plugin for Databricks Delta Lake along with the Fusion Plugin for Live Hive to replicate changes made in Apache Hive to Delta Lake tables accessible from a Databricks environment.
A deployment will consist of two Zones:
- Zone 1
This represents the source environment, where your Apache Hive content and metadata reside. Your table content will reside in the cluster storage (typically HDFS), and your Hive metadata are managed by and maintained in a Hive Metastore. An operational deployment of a LiveAnalytics solution will include:
WANdisco Fusion
Fusion Plugin for Live Hive
- Zone 2
This is the target environment, where your Databricks instance is available. Hive content from Zone 1 will be replicated to cloud storage (e.g. Azure Data Lake Storage Gen2) and transformed to the format used by Delta Lake. Metadata changes made to Hive tables in Zone 1 will be replicated to equivalent changes to Databricks Delta Lake tables. An operational deployment of the solution will include:
WANdisco Fusion
Fusion Plugin for Databricks Delta Lake
3.4. Supported Functionality
3.4.1. Initial Data Migration
Migrate Hive metadata and data that existed in a source zone prior to the introduction of the Plugin for Databricks Delta Lake to Databricks Delta Lake tables in the target zone. Take advantage of WANdisco Fusion functionality including LiveMigrator to initiate migration and transition to LiveAnalytics for ongoing replication of your analytic information to Databricks Delta Lake.
3.4.2. Selective Replication
Choose specific content, databases and tables to replicate from Hive to Databricks using a convenient pattern definition to match databases and tables by name.
3.4.3. Hive File Formats
Apache Hive allows different file formats to be used for Hive tables. The Fusion Plugin for Databricks Delta Lake supports replication from Hive tables that use a subset of these data formats. Supported donor table formats for this release are:
- Optimized Row Columnar (ORC)
The Hive ORC file format is used typically in HDP clusters. ORC format can be specified like:
CREATE TABLE … STORED AS ORC
ALTER TABLE … [PARTITION partition_spec] SET FILEFORMAT ORC
SET hive.default.fileformat=ORC
- Parquet
The Hive Parquet file format is an ecosystem wide columnar file format for Hadoop, normally used in CDH clusters. Parquet format is specified in Hive using statements like:
CREATE TABLE … STORED AS PARQUET
ALTER TABLE … [PARTITION partition_spec] SET FILEFORMAT PARQUET
SET hive.default.fileformat=Parquet
- Other Formats
Hive file formats that are not yet supported for source tables include:
Text File
SequenceFile
RCFile
Avro Files
Custom INPUTFORMAT and OUTPUTFORMAT
The Plugin for Databricks Delta Lake automatically determines the format of a source table prior to replication.
3.4.4. WANdisco LiveData Features
WANdisco Fusion provides a LiveData platform that offers continuous replication of changes to Hive content, making it available as Delta Lake tables in Databricks without the need to schedule transfer.
3.4.5. Hadoop and Object Storage
Work across a variety of big data source and target environments, including major Hadoop and object storage technologies.
3.4.6. Broad Hive support
The vast majority of Hive table types can be replicated without change. Take advantage of Hive features such as partitioning, managed tables, optimized table formats, etc. without needing to adjust your migration strategy.
3.4.7. Automation
The WANdisco Fusion platform automatically responds to service and network disruption without the need for administrative interaction.
3.4.8. Selective Replication
Take advantage of powerful tools to select subsets of your Hive datawarehouse for replication to Databricks Delta Lake. Define policies that control which data sets are replicated between environments, and selectively exclude data from migration.
3.4.9. Scale
The LiveAnalytics solution operates equally effectively whether you have 1 terabyte or many petabytes of data. Scale effectively without introducing overhead, and operate your environments as you need to while leveraging the unique capabilities of Databricks against data that was previously locked up in Hadoop.
4. Installation
A full deployment for LiveAnalytics includes a source zone and a target zone, as described in Deployment Architecture. Follow these installation details for the target environment, and refer to the Installation Guide for the Fusion Plugin for Live Hive for details for the source environment.
Installation is a three-step process that includes:
Installing all pre-requisite components
Executing the command-line installer for the Plugin for Databricks Delta Lake
Activating your environment
4.1. Pre-requisites
- Source Zone
WANdisco Fusion and the Fusion Plugin for Live Hive deployed, as described in Deployment Architecture (see also the Installation Guide for the Fusion Plugin for Live Hive)
- Target Zone
An Azure Data Lake Storage Gen2 storage account with access to a file system
WANdisco Fusion 2.14 deployed and configured to access the ADLS Gen2 storage account
With those pre-requisites in place, you can install and configure the Plugin for Databricks Delta Lake.
4.1.1. Setup
Begin installation by obtaining the installer from customer.wandisco.com. Your download location will be provided by WANdisco after you purchase the necessary license for access to and use of the software.
4.1.2. System Requirements
Along with the standard product requirements for WANdisco Fusion, you need to ensure that your environment meets the following system requirements:
- Fusion Server Host
While the Plugin for Databricks Delta Lake imposes minimal overheads on the system requirements for the Fusion server, you should confirm the availability of:
An additional 25 GB of disk space available for the mount point to which the Fusion server logs operations using the
/var/log/fusion/server/fusion-server.log file. The Plugin for Databricks Delta Lake logs additional information that will require extra storage space.
An additional 2 GB of RAM for the operation of the plugin. Additional memory is required for the caching performed by the plugin to maintain information about the databases and tables under replication. If you intend to replicate a large number of unique Hive tables (more than 100), please contact WANdisco support for assistance in meeting system requirements.
- Databricks environment
The Fusion Plugin for Databricks Delta Lake replicates changes made to matching Hive content and metadata on a continuous basis. The rate at which operations are performed against Hive content governed by a Hive replication rule therefore contributes additional load in the Databricks environment. While the Plugin for Databricks Delta Lake manages that load and accounts for failed Spark job submissions, you should monitor the operation of your Databricks environment to ensure that the available cluster resources are sufficient to accommodate this added load.
4.2. Command-line Installation
Install the Fusion Plugin for Databricks Delta Lake to a Fusion server that is configured in a zone associated with ADLS Gen2 storage. You should have previously deployed and configured this Fusion instance with suitable credentials and information, and confirmed that file system replication results in the successful availability of content in that ADLS Gen2 file system.
Prior to installation, ensure that you have a Databricks cluster available and running.
The install process requires that you obtain the following information before running the databricks-installer.sh file as root, and provide it to the installer using the associated configuration controls:

- The name of the ADLS Gen2 storage account (set-account)
- The name of the container (file system) in that storage account (set-container)
- The storage account access key (set-account-key)
- The address of the Databricks service (set-address)
- A bearer token for the Databricks cluster (set-bearer)
- The Databricks cluster ID (set-cluster)
- The unique JDBC HTTP path for the cluster (set-jdbc-http-path)
An example install is:
./databricks-installer.sh \
  set-account=azurestorageaccountname \
  set-container=adls2filesystemname \
  set-account-key=kG5m4i2x74QOZiMUMA16d4LP5D4zmMPf90H6iJ1Iub \
  set-address=eastus2.azuredatabricks.net \
  set-bearer=dapiecd87ed981997a3d6efda572a7ebb348 \
  set-cluster=0815-207120-pups123 \
  set-jdbc-http-path=sql/protocolv1/o/6971298374654602/0815-207120-pups123

[WANdisco ASCII art banner]

You are about to install WANdisco Databricks Delta Lake version <version>
Do you want to continue with the installation? (Y/n)
databricks-fusion-core-plugin-<version>-xxxx.noarch.rpm ... Done
databricks-ui-server-<version>-dist.tar.gz ... Done
Uploading datatransformer.jar to the dbfs ... {}
Installing library ... {}
Restarting Spark cluster is required as part of plugin activation
Do you wish to restart Spark cluster now? (y/N) y
{}
All requested components installed. Running additional post install actions...
Restarting fusion-server is required as part of plugin activation
Do you wish to restart fusion-server now? (y/N) y
Stopping WANdisco Fusion Server: fusion-server
Stopped WANdisco Fusion Server process 27356 successfully.
Starting WANdisco Fusion Server: fusion-server
Started WANdisco Fusion Server process successfully.
Complete the installation by following the prompts before you activate the plugin.
4.3. Plugin Activation
The Fusion Plugin for Databricks Delta Lake works when paired with the Fusion Plugin for Live Hive. After installing both plugins, activate your environment for operation by following the activation process for the Fusion Plugin for Live Hive.
4.4. Installer Help
You can provide the help parameter to the installer package for guidance on the options available at install time:
# ./databricks-installer.sh help
Verifying archive integrity... All good.
Uncompressing WANdisco Databricks Delta Lake.............
This usage information describes the options of the embedded installer script.
Further help, if running directly from the installer is available using '--help'.
The following options should be specified without a leading '-' or '--'.
Also note that the component installation control option effects are applied in the order provided.

General options:
  help                            Print this message and exit

Component configuration control:
  set-account=                    Name of the ADLSv2 storage account
  set-container=                  Name of the container in the storage account
  set-account-key=                Storage account access key
  set-address=                    Address of the Databticks service
  set-bearer=                     Bearer token for the Databricks cluster
  set-cluster=                    Databricks cluster ID
  set-jdbc-http-path=             Unique JDBC HTTP path

Component installation control:
  only-fusion-ui-server-plugin    Only install the plugin's fusion-ui-server component
  only-fusion-server-plugin       Only install the plugin's fusion-server component
  only-upload-etl                 Only upload ETL jar
  skip-fusion-ui-server-plugin    Do not install the plugin's fusion-ui-server component
  skip-fusion-server-plugin       Do not install the plugin's fusion-server component
  skip-upload-etl                 Do not upload ETL jar

Post Install service restart control:
  These options if not set will result in questions in interactive script use.
  restart-fusion-server           Request fusion-server restart
  skip-restart-fusion-server      Skip fusion-server restart
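For example, to perform a complete installation without being prompted for a Fusion server restart, combine the configuration controls with the restart control options. The values shown here are placeholders; substitute the details for your own storage account and Databricks cluster:

./databricks-installer.sh \
  set-account=mystorageaccount \
  set-container=myfilesystem \
  set-account-key=<storage-account-key> \
  set-address=<region>.azuredatabricks.net \
  set-bearer=<bearer-token> \
  set-cluster=<cluster-id> \
  set-jdbc-http-path=<jdbc-http-path> \
  restart-fusion-server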
4.5. Configuration
Configure the Plugin for Databricks Delta Lake after installation by using the Databricks Configuration section in the Settings tab. Apply modified configuration properties by adjusting their values and clicking the Update button. Your modifications will only take effect after restarting the Fusion server, which can be performed from the Nodes tab.
You can also modify configuration properties by editing them in the
/etc/wandisco/fusion/plugins/databricks/databricks.properties file, then
restarting the Fusion server.
4.6. Validation
Validate your environment is functional by replicating a simple test database, table and content following these steps in the Hadoop zone:
Create a HCFS replication rule
Log in to the Fusion user interface for the zone associated with your Hadoop environment, and create a Replication Rule for the
/apps/hive/warehouse/delta_lake_test.db location.
Create a Hive replication rule
Create a Hive Replication Rule for the same database, using:
Create a database
Log in to the
beeline tool as a user with sufficient privileges to create content in the Hive warehouse (e.g. as the
hdfs user), connect to Hive, and issue this command:
create database delta_lake_test;
Create a table
In the same
beeline session, create a test table in the database with this command:
create table delta_lake_test.test_table (id int, value string);
Create content
In the same
beeline session, generate simple content for the table with this command:
insert into delta_lake_test.test_table values (1, "Hello, world.");
Validate successful replication
In your Databricks environment, you should be able to see that the
delta_lake_test database exists, containing the
test_table Delta Lake table with content from the insert operation. This can be seen from a notebook with the command:
%sql select * from delta_lake_test.test_table
Remove your test database
Issue these commands in your
beeline session to remove the test database and content from the Hadoop cluster and from the Databricks environment:
drop table delta_lake_test.test_table;
drop database delta_lake_test;
You can then remove the Hive and HCFS replication rules in the Fusion user interface.
5. Operation
Operate the Plugin for Databricks Delta Lake by creating Hive and HCFS replication rules that match the metadata and content of Hive tables and databases to make them available as Delta Lake tables for Databricks. Subsequent changes to Hive content that matches these rules will initiate replication to maintain consistent Databricks databases and tables.
You can then use the full set of Databricks features, with local speed of access, on data that would otherwise be isolated from your cloud environment. Query and analyze your data through collaborative exploration of your largest datasets, building models iteratively, speeding up your data preparation tasks and operating machine learning lifecycles at scale.
While the majority of Hive functionality is supported by the Plugin for Databricks Delta Lake, some aspects of Hive are not yet supported.
- Table formats
The first release of the Plugin for Databricks Delta Lake supports Hive tables in ORC and Parquet format only. Tables using other formats will not replicate to Delta Lake successfully. Additional table formats will be added in future releases of the plugin.
- Bucketed tables
The plugin does not yet provide support for replicating the bucketing properties used by Hive tables to their Delta Lake replicas.
5.1. Administration
Administer the Plugin for Databricks Delta Lake by creating and managing replication rules that govern the Hive information that you want to make available as Delta Lake tables in the Databricks runtime. Replicate Hive metadata that defines databases, tables and partitions, changes to that metadata, and associated Hive content.
5.1.1. Creating Replication Rules
Create replication rules in the zone where your Apache Hive system operates. Browse your Hadoop file system to identify content for replication, and specify patterns to match your Hive content.
- HCFS Replication
Data for the content of your Hive tables resides in the Hadoop-compatible file system used by your cluster. Create HCFS replication rules to make this content available in the cloud storage accessible to your Databricks runtime. You can create an HCFS replication rule at the level of an individual table (i.e. the location in the HCFS file system where the table content is held), or at some parent directory for this content (e.g. for an entire Hive database, or even the entire Hive warehouse.)
Create HCFS replication rules as described in the Create a rule section of the WANdisco Fusion user guide.
- Hive Replication
Metadata for Hive constructs resides in the Hive Metastore. Create Hive replication rules to make this metadata available as equivalent, replica Delta Lake tables in your Databricks environment. Each Hive replication rule uses a pattern to match specific Hive tables and databases.
Create Hive replication rules as described in the Create Hive rule section of the Fusion Plugin for Live Hive user guide.
5.2. Initial Hive Table Migration
If you want to migrate existing Hive content from your Hadoop cluster to Databricks as Delta Lake tables, the LiveAnalytics solution provides a comprehensive approach to data and metadata migration. Use your choice of Make Consistent or Live Migrator functionality to populate initial content. WANdisco's LiveAnalytics solution takes advantage of the benefits of Live Migrator to ensure that your Hive content is consistent between Hadoop and the cloud storage accessible to Databricks, even if it is undergoing change during migration.
Follow these steps to perform initial Hive table migration to Databricks:
Create replication rules for content
Create an HCFS replication rule that matches your Hive content, then create a Hive replication rule to match the metadata that you want to make available.
Use Live Migrator
Use the Live Migrator functionality to migrate Hive table content to cloud storage. You should ensure that your Databricks environment has sufficient permissions to read from the cloud storage endpoint to which Live Migrator is delivering the content.
Migrate Hive Metadata
Use the Plugin for Databricks Delta Lake REST API to initiate a migration of Hive metadata to Delta Lake. Track the corresponding migration task with the REST API to ensure that it completes successfully. To do this:
On the Live Hive cluster, initiate the migration of the Hive schema for the database:
curl -X POST -i 'http://<fusion.hostname>:8082/plugins/hive/migrate?ruleId=<Replication Rule ID>&database=<Database Name>'
On the Databricks Fusion node, initiate the migration of the Hive data into Delta Lake:
curl -X POST -i 'http://<fusion.hostname>:8082/plugins/databricks/migration?dbName=<Database Name>'
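The same two calls can also be scripted. The sketch below uses Python's requests library; the hostnames, rule ID and database name are placeholders, and the shape of the responses (a task URI that can be polled, returning JSON with an isDone flag) is an assumption based on the Task Query section later in this guide.

import time
import requests

LIVE_HIVE = "http://live-hive-fusion.example.com:8082"      # placeholder hostname
DATABRICKS = "http://databricks-fusion.example.com:8082"    # placeholder hostname

# 1. Migrate the Hive schema for the database on the Live Hive cluster
requests.post(LIVE_HIVE + "/plugins/hive/migrate",
              params={"ruleId": "1234", "database": "delta_lake_test"})

# 2. Migrate the Hive data into Delta Lake from the Databricks Fusion node
resp = requests.post(DATABRICKS + "/plugins/databricks/migration",
                     params={"dbName": "delta_lake_test"})
task_uri = resp.text.strip()   # assumed to be the task URI described under Task Query

# 3. Poll the task until it reports completion (response format is assumed JSON)
while True:
    status = requests.get(DATABRICKS + task_uri).json()
    if status.get("isDone"):
        break
    time.sleep(30)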
5.3. Maintenance
The Plugin for Databricks Delta Lake leverages the WANdisco Fusion platform, and inherits its approach to logging, runtime operation and general maintenance requirements. Please consult the WANdisco Fusion user guide for further information.
5.4. Sizing and performance
Observe the performance of your LiveAnalytics deployment over time to ensure that it meets your requirements. General advice on performance characteristics is given below, but you may want to take advantage of WANdisco support if you have a particularly demanding environment.
A properly sized implementation of the Plugin for Databricks Delta Lake will need to take into account two key constraints, each of which is affected by the operational behavior of the environments:
- Bandwidth vs Rate of Change
The WANdisco Fusion platform is used by the Plugin for Databricks Delta Lake to replicate file system content, including Hive table content. This typically involves transfer over a network that will have an upper bandwidth capacity (WANdisco Fusion can also enforce additional transfer limits).
If the rate of change to your replicated Hive content exceeds the available bandwidth, your replication performance will be affected. If these conditions persist, it will be impossible to maintain a current replica of Hive content. You will need to either increase available bandwidth, or modify application behavior to limit the rate of change to your Hive content.
In some instances, your environment may benefit from WAN optimization technologies, which can increase the effective bandwidth between zones.
- Transformation
Following successful replication of Hive content to cloud storage, the Plugin for Databricks Delta Lake performs a data transformation task to convert the original format of the Hive content into equivalent Delta Lake form. This uses a Databricks cluster (that you configure) to perform transformation.
If you find that there are significant delays between the Hive content becoming available in your cloud storage (as configured in the WANdisco Fusion platform), and the same information being queryable from a Databricks notebook or Spark job, you may need to allocate additional resources to the Databricks cluster to enable it to transform your content more readily. If the Databricks cluster configured for this purpose for the plugin is also used for other jobs, you may want to consider employing a cluster dedicated to this transformation so that it is unaffected by other work.
5.5. Troubleshooting
5.5.1. Check the Release notes
Updates on known issues, enhancements and new product releases are made available at the WANdisco community site, including product release notes, known issues, best practices and other information.
5.5.2. Check log Files
The WANdisco Fusion platform maintains flexible log information that can be configured to expose minimal or detailed information on product operation. Logging levels can be configured dynamically, allowing you to capture detail when required and minimize overhead when it is not.
Please consult the WANdisco Fusion user guide for details on logging.
6. Reference Guide
6.1. Plugin REST API
Control the Fusion Plugin for Databricks Delta Lake using a REST API that extends the operations available from the Fusion server. Understand the resources and their functionality with the details in this section. Use them to migrate existing Hive content to Delta Lake tables in Databricks and to manage the Spark jobs that the plugin submits during operation.
The Databricks resource is the root resource providing functionality specific
to the Plugin for Databricks Delta Lake. Access the following resources under this context, which is
at the
/plugins/databricks root URI.
6.1.1. Failed Spark Jobs
- Resource
/plugins/databricks/failedSparkJobs
GEToperation
Gets the request IDs of all jobs for the provided database that failed.
Query parameter
database: The name of the Hive database under consideration
Response: A list of the request identifiers associated with any failed jobs for the database
POSToperation
Resubmits a previously failed Spark job for execution.
Query parameter
database: The name of the Hive database under consideration
Query parameter
requestId: The request ID for a previously failed job
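As an illustrative sketch of combining the two operations (the hostname and database name are placeholders, and the JSON response is assumed to be a plain list of request identifiers), every failed job for a database can be resubmitted like this:

import requests

BASE = "http://databricks-fusion.example.com:8082/plugins/databricks"

failed = requests.get(BASE + "/failedSparkJobs",
                      params={"database": "delta_lake_test"}).json()

for request_id in failed:
    # resubmit each previously failed Spark job for execution
    requests.post(BASE + "/failedSparkJobs",
                  params={"database": "delta_lake_test",
                          "requestId": request_id})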
6.1.2. Failed Spark Job Detail
- Resource
/plugins/databricks/failedSparkJobs/detail
GEToperation
Gets the details for a failed Spark job.
Query parameter
database: The name of the Hive database under consideration
Query parameter
requestId: The request ID for the job
Response: Details of the specified job.
6.1.3. Spark Job Log Management
- Resource
/fusionLog
POSToperation
Cleans the job and commit records associated with successfully completed Spark jobs.
Query parameter
database: The name of the Hive database under consideration
6.1.4. Failed Job Submissions
- Resource
/plugins/databricks/failedSubmissions
GEToperation
Gets a list of details associated with failed job submissions.
Response: A list of details for each failed job submission.
POSToperation
Resubmits a previously failed job for execution, removing it from the list of failed submissions if it was submitted successfully.
Query parameter
requestId: The request ID of the previously failed job submission
6.1.5. Migration
- Resource
/plugins/databricks/migration
POSToperation
Migrates a Hive database, including all its tables, into a Databricks environment as Delta Lake tables. Use this endpoint to bring pre-existing Hive content into the Databricks environment. You must have previously migrated Hive table content to cloud storage using the Make Consistent feature of the WANdisco Fusion platform, or preferably Live Migrator, which will ensure that the data exist in full in your cloud storage even if they continue to be modified in your Hadoop cluster.
Query parameter
database: The name of the Hive database to migrate
Response: A URI that can be queried for the status of the task associated with the migration.
6.1.6. Task Query
This is a standard WANdisco Fusion resource that is useful when querying the status of long-lived operations such as migration.
- Resource
/fusion/task/<taskid>
GEToperation
Returns details of a specific task.
Response: A JSON structure that includes a variety of key/value pairs representing the status of the task. | https://docs.wandisco.com/bigdata/wdfusion/plugins/databricks-deltalake/4.0/ | 2019-10-14T03:17:51 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['images/LiveAnalyticsOverview.png', 'LiveAnalyticsOverview'],
dtype=object)
array(['images/ProductArchitecture1.png', 'Product Architecture'],
dtype=object)
array(['images/lm1.0_migration2.png',
'LiveMigrator for Initial Migration'], dtype=object)
array(['images/ReplicationRule1.png', 'Replication Rule Hive Selection'],
dtype=object)
array(['images/PluginSettings1.png', 'PluginSettings1'], dtype=object)
array(['images/Operation1.png', 'Operation1'], dtype=object)] | docs.wandisco.com |
Common installation problems¶
This page details problems commonly faced while following the Installing Ubuntu Touch 16.04 images on Halium page.
SSH hangs when trying to connect¶
The SSH connection may hang indefinitely when trying to connect. Attempts to stop the connection with Control-C may or may not return you to a shell prompt. If you run
ssh -vvvv [email protected], you only get the following output before the program stops:
debug1: Connecting to 10.15.19.82 [10.15.19.82] port 22. debug1: Connection established. [...] debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH[...]
This seems to be a common error on arm64 devices with kernel 3.10 when rsyslogd is enabled. If you have this error, please add your voice to ubports/ubuntu-touch#560 and then try the following workaround:
Reboot the device to recovery and connect with
adb shell
Run the following commands to disable rsyslogd:
mkdir /a
mount /data/rootfs.img /a
echo manual | tee /a/etc/init/rsyslog.override
umount /a
sync
You may now reboot the device. You should be able to connect to SSH once it comes back online.
Device reboots after a minute¶
The device may reboot cleanly after about a minute of uptime. If you are logged in when the reboot occurs, you will see the following message:
Broadcast message from [email protected] (unknown) at 16:00 ... The system is going down for reboot NOW!
This happens because
lightdm, the Ubuntu Touch display manager, is crashing repeatedly. The system watchdog process sees this and reboots the device.
To fix this problem, log in before the reboot occurs and run the following command:
sudo stop lightdm | http://docs.ubports.com/en/latest/porting/common-problems-install.html | 2019-10-14T02:56:59 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.ubports.com |
UpdateMailboxQuota
Updates a user's current mailbox quota for a specified organization and user.
Request Syntax
{
   "MailboxQuota": number,
   "OrganizationId": "string",
   "UserId": "string"
}
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- MailboxQuota
The updated mailbox quota, in MB, for the specified user.
Type: Integer
Valid Range: Minimum value of 1.
Required: Yes
- OrganizationId
The identifier for the organization that contains the user for whom to update the mailbox quota.
Type: String
Pattern:
^m-[0-9a-f]{32}$
Required: Yes
- UserId
The identifier for the user for whom to update the mailbox quota.
Type: String
Length Constraints: Minimum length of 12. Maximum length of 256.

Required: Yes
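For example, with the AWS SDK for Python (Boto3) the call looks like the sketch below; the organization and user identifiers are placeholders.

import boto3

workmail = boto3.client("workmail", region_name="us-east-1")

workmail.update_mailbox_quota(
    OrganizationId="m-0123456789abcdef0123456789abcdef",          # placeholder
    UserId="S-1-1-11-1111111111-2222222222-3333333333-3333",      # placeholder
    MailboxQuota=30720,  # new quota in MB (30 GB)
)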
Navigation
Can I jump to a specific date in the calendar?
Yes. Simply add the date to the end of DayBack's URL like this:
Note that the date is in the form YYYY-MM-DD. You can also land on a specific tab of the calendar the same way by adding a value for "view":
Here are the possible values for view, each corresponding to a "tab" in DayBack. "Agenda" views are the "schedule" views in DayBack--the ones with times along the left-hand side. "Basic" views are the "List" views in DayBack:
- basicDay
agendaDay
basicWeek
agendaWeek
month
basicResourceVert
agendaResourceVert
basicResourceHor (the Resource "Grid")
basicHorizon
How can I arrive at a specific event?
If you're in your own solution and want to see a given record in the calendar, you can do this by adding the event's ID to DayBack's URL.
Going further
FileMaker Specific
If you're in FileMaker WebDirect you can have buttons in your solution navigate to the layout containing the DayBack web viewer and the set the web viewer URL to get the same effect. Very cool =)
You can see a video of this in action and download an example file here: Driving DayBack Online with URLs. | https://docs.dayback.com/article/129-navigation | 2019-10-14T04:46:24 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.dayback.com |
Bootstrap Controls for ASP.NET Core are in maintenance mode. We don’t add new controls or develop new functionality for this product line. Our recommendation is to use the ASP.NET Core Controls suite.
The Bootstrap List Box control displays a list of items that can be selected by end-users.
Bootstrap List Box offers the following features:
Multiple Selection
The Bootstrap List Box editor allows multiple list items to be selected at the same time; this functionality is controlled by the BootstrapListBoxBuilder.SelectionMode method. The following selection modes are available within the Bootstrap List Box. On the client side, the editor is represented by the BootstrapListBox object, which serves as a client-side equivalent of the Bootstrap List Box control.
You can modify the editor behavior using the following methods. | https://docs.devexpress.com/ASPNETCoreBootstrap/119672/data-editors/list-box | 2019-10-14T04:34:28 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.devexpress.com |
dhtmlxComboBox is an advanced select box that provides the ability to show suggestions while a user is entering text in the input. The component allows you to set custom filtering rules and specify templates of displaying options in the list. Among other nice features there are tuning of the list of options and Combo Box input, selection of multiple options and data loading on request. Check online samples for dhtmlxComboBox. | https://docs.dhtmlx.com/suite/combo__index.html | 2019-10-14T03:50:51 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.dhtmlx.com |
The sample is described in the following topics:
Introduction
This sample demonstrates how to invoke yahoo weather API and displays results in a Web application.
Prerequisites
- Register and log in to WSO2 App Factory here: ...
The results you get in development environment differ from the same in production environment, because the keys used by the two environments are different. The programmer is generally not aware of the sandbox and production keys. This is handled under the hood. | https://docs.wso2.com/display/AF100/Consuming+APIs+from+Applications | 2019-10-14T03:23:23 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.wso2.com |
Digital Media & Marketing | Continuing Studies » Duke Continuing Studies has partnered with Simplilearn, an online training… with advanced knowledge of the eight most important digital marketing domains. Learnmore.duke.edu
The Top 28 Digital Marketing Certificate Programs to Enroll » You can go for an online digital marketing certificate in the comfort of your home. It is fast and convenient. Renowned universities are offering different courses in ... Searchenginejournal.com
The 20 Best Online Digital Marketing Certificates 2017-2018 - Best… » In this list we have compiled 20 of the best online digital marketing certificates available using a two-fold metric that weighs the value of the program's core ... Bestmarketingdegrees.org
The Top 5 Free Online Courses for Digital Marketers | Inc.com » Mar 22, 2016… Which digital marketing course should you take to boost your marketing skills, absolutely free of charge? Inc.com
The 20 Most Affordable Online Certificates in Digital Marketing - Best… » An online Digital Marketing Certification, on the other hand, is an affordable way to… Each of these affordable programs is accredited and established, high in ... Bestmarketingdegrees.org
Digital Marketing Certificate Programs | eCornell » In a world where more and more activities are centered online, digital marketing is one of the most effective ways to build customer relationships and promote ... Ecornell.com
15 of the best digital marketing certifications you should obtain… » Jan 23, 2018… There are a lot of digital courses and certifications available online today. Many of these courses and certifications are free for digital marketers ... Knowledgeenthusiast.com
Digital Marketing Certificate Online Program | Wharton » …in high gear with an online Digital Marketing Certificate program from Wharton.… marketing programs in the country, the online Digital Marketing Certificate ... Online.wharton.upenn.edu
17 Awesome Free Online Marketing Courses for Digital Marketers » May 24, 2018… Looking for the best free marketing courses to take in 2018? Then look no further. Use the courses on this list to learn everything from SEO to ... Ahrefs.com
Online Digital Marketing Certificate Programs at University of… » The UVM online Digital Marketing Certificate program is an online professional certificate where you'll learn how to implement a full actionable digital marketing ... Learn.uvm.ed. | https://search-docs.net/q:2sWlGp20WgWy20WnGlnpmjXp2gHv3hXa-b5a5IRZlYwbFaVa5INcRZlYRb4cJZJbMT/ | 2019-10-14T04:39:08 | CC-MAIN-2019-43 | 1570986649035.4 | [] | search-docs.net |
scipy.stats.genlogistic¶
scipy.stats.
genlogistic= <scipy.stats._continuous_distns.genlogistic_gen object>[source]¶
A generalized logistic continuous random variable.
As an instance of the
rv_continuousclass,
genlogisticobject inherits from it a collection of generic methods (see below for the full list), and completes them with details specific for this particular distribution.
Notes
The probability density function for
genlogisticis:\[f(x, c) = c \frac{\exp(-x)} {(1 + \exp(-x))^{c+1}}\]
for \(x > 0\), \(c > 0\).
genlogistictakes \(c\) as a shape parameter.
The probability density above is defined in the “standardized” form. To shift and/or scale the distribution use the
locand
scaleparameters. Specifically,
genlogistic.pdf(x, c, loc, scale)is identically equivalent to
genlogistic.pdf(y, c) / scale with y = (x - loc) / scale.
>>> rv = genlogistic(c)
>>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')
Check accuracy of cdf and ppf:
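A quick round-trip check along the lines of the other scipy.stats distributions (the shape value c = 0.412 is just an arbitrary choice):

>>> import numpy as np
>>> from scipy.stats import genlogistic
>>> c = 0.412
>>> vals = genlogistic.ppf([0.001, 0.5, 0.999], c)
>>> np.allclose([0.001, 0.5, 0.999], genlogistic.cdf(vals, c))
True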
Main Page/More Resources/Row 3 From Joomla! Documentation < Main Page | More Resources Joomla! CMS Version 2.5 - is the current LTS version with active support. For more information see the Joomla 2.5 category page. Joomla! 2.5 Requirements Installation Migrating from Joomla 1.5 to Joomla 2.5 Joomla! Security - Questions about securing your Joomla! web site? Pages all Joomla! Administrators should read. Joomla! Security Checklist Vulnerable Extensions List User and Access Management Admin Password Recovery
What's in the Release Notes?
The release notes include the following topics:
- Key Features
- What's New in Release 2.11?
- Internationalization
- Compatibility Notes
- Resolved Issues
- Known Issues
Key Features
VMware App Volumes is a real-time application delivery system that enables Enterprise IT to instantly deliver applications with complete application lifecycle management. VMware App Volumes provides seamless end-user experience while reducing infrastructure and management costs.
What's New in Release 2.11?
You can now deliver AppStacks and Writable Volumes to Instant Clones and Linked Clones hosted on VMware Horizon View 7.0 and VMware Horizon View 7.0.1 machines.
All communications from the App Volumes agent are now done through TLS 1.1 and TLS 1.2 protocols. There are no configuration changes needed as the agent communicates only through the newer protocols. However, the App Volumes Manager accepts communications through the TLS 1.0 protocol to support connections from earlier agents.
Internationalization
VMware App Volumes product documentation is available in English for release 2.11.
Compatibility Notes
Supported Infrastructure for App Volumes 2.11:
- VMware ESX 5.5.x, 6.0 and vCenter (ESX/vCenter must be the same version)
- VMware Virtual SAN 6.2
- Horizon View 6.0.1, 6.0.2, 6.1, 6.2, 7.0, and 7.0.1
- Citrix XenDesktop 5.5, 5.6, and 7.X
- Citrix XenApp 6.5 and 7.x
VMware App Volumes Manager:
- Microsoft Server 2008 R2 and 2012 R2
- Microsoft SQL Server/Express 2008
- Microsoft SQL Server 2012 SP1, SP2, and SP3 when App Volumes Manager is installed on Microsoft Server 2012 R2
VMware App Volumes Agent:
- Windows 10 (Professional and Enterprise, with Windows 10 November 2015 KB 3163018 OS Build 10586.420 applied)
- Windows 7 SP1 32/64 bit (Professional and Enterprise, with Microsoft Hot fix 3033929 applied)
- Windows 8.1 32/64 bit
- Windows Server 2008 R2 and 2012 R2 for RDSH
- Windows Server 2008 R2 and 2012 R2 for Server VDI
Resolved Issues
The following issues were resolved in this release:
- The Perform Search error appears when you provide unicode characters for the Organizational Unit (OU) name, while configuring Active Directory.
- An event log error appears every 30 seconds each time you attach an AppStack on a Windows 8 or a Windows 10 machine.
- Certain applications might lose network connectivity when delivered in an AppStack if filter drivers are installed on the machine.
For more information, see KB 2145681
- The App Volumes Manager does not delete entries such as datastores and virtual machines from the database after they are removed from the vCenter Server.
As the App Volumes Manager cleans-up machines and storage locations which do not exist on the vCenter Server every 4 and 22 hours respectively, the GUI might not reflect the changes immediately.
- The launch of applications is slow when an AppStack is attached due to slow performance of RegEnumKey operations of large registry keys.
For more information, see KB 2145683
- BSOD occurs when you rename a folder within a special folder in App Volumes.
- BSOD occurs due to stack overflow when additional products with filter drivers are present in App Volumes.
- Microsoft Office might reconfigure when delivered through an AppStack to non-English versions of Windows.
- The allvolattached.bat script fails to run at the time of user log in.
- The refresh operation of an AppStack fails on the App Volumes Manager.
- Writable Volumes fail to attach on a machine when they are located on different vCenter Servers.
- AppStacks cannot be updated when they are located in several storage locations across different vCenter Servers.
- Expanding Writable Volumes fails on the client machine and the size does not update correctly in the App Volumes Manager UI.
- The Windows Start menu fails to launch with AppStacks on Windows 10 machines.
- An antivirus software cannot detect infected files on a Writable Volume.
Known Issues
- AppStacks and Writable Volumes fail to attach on instant clone agents if the Data center name and virtual machine folder locations have non-English characters.
Workaround: Change the Data center name and the virtual machine folder name to English.
- The Windows Search menu fails to launch with AppStacks on Windows 10 machines.
- BSOD occurs on 1 desktop during logon in a pool of 2000 desktops when AppStacks and a Writable Volume are attached.
Workaround: Restart the machine or perform a delete operation to reprovision the virtual machine.
- Some machines appear offline in the App Volumes Manager after performing the Push Image operation on a large pool of instant clone desktops.
Workaround: Restart the machines that appear offline, or restart the App Volumes service on these machines.
- Windows Home Editions are not supported in this release.
Workaround: None.
- An Outlook search might generate a configuration popup when search indexing is enabled.
Workaround: Disable the Windows Search service and searching will work without search indexing.
- Apple iTunes contained in an AppStack must be assigned prior to user login. Dynamic "real-time" assignment for this application is not supported.
Workaround: None.
- Use of Novell products with App Volumes can cause unexpected behavior. Novell products are not supported by VMware App Volumes.
Workaround: None.
- If a user logs in to a virtual desktop with a Writable Volume and a user later logs in without a Writable Volume, a pop-up appears.
Workaround: Refresh or reboot after each logout when multiple users log into the same virtual desktop.
- Microsoft QuickBooks can fail to correctly install if provisioned in the same AppStack with Microsoft Office 2010 or 2013.
Workaround: Provision Microsoft QuickBooks and Microsoft Office 2010, or 2013 in different AppStacks.
- Recommended scale limit is of 1000 concurrent VM connections to a single App Volumes Manager when using multi vCenter mode.
Workaround: None.
- If Adobe Reader AppStack is provisioned in virtual machine with Windows OS installed on a drive other than C, and the AppStack is attached to the a virtual machine with Windows OS installed on C drive, then all PDF files viewed in Windows Explorer will not show the Adobe Reader icon and cannot be opened by simply clicking on the icon. You can still open Adobe Reader.
Workaround: None.
-.
Workaround: None.
- Removing volumes in real-time can result in unexpected Windows behavior.
Workaround: None.
- If an AppStack is created in the background and it fails because an AppStack already exists with the same name on that datastore, the background job will be retried 5 times before being removed.
Workaround: None.
- Renaming a datastore will result in disabled volumes which cannot be deleted using the VMware App Volumes Manager.
Workaround: None.
- Renaming a virtual machine will not be reflected in the VMware App Volumes database.
Workaround: None.
- When a user.
Workaround: Ensure that a virtual machine is rebooted after every user logoff.
- High CPU consumption is observed during provisioning of applications on 64-bit Windows 10 machines.
Workaround: None.
- Once you provision a Horizon View client on a 64-bit agent machine, the subsequent provisioning of Horizon View client on the same machine fails.
Workaround: Use a clean 64-bit agent machine for provisioning the client.
- Storage Group replication fails if the names of the datastores are identical between different vCenter servers.
Workaround: Rename the datastores to ensure that the names are unique.
- Renaming a datastore from a vCenter server that is configured for use in App Volumes Manager, lists both the old and new names of the datastore in the Manager UI.
Workaround: None.
- Creation of AppStacks might fail if you use special characters such as "@:" while naming the AppStacks.
Workaround: None.
- When VHD In-Guest connection mode is selected during the first instance of App Volumes Manager configuration, the Summary tab displays the default storage path appended with a redundant “|” character.
Workaround: None.
- When you use special characters such as “%” or “\” for a storage name, the storage list incorrectly displays “%25” and “%5c” for the names in places such as the Default Storage Location list on the Storage tab.
Workaround: None.
- Unable to install Chrome browser extensions from an AppStack.
Workaround: None.
- The RunDll error appears when you install Adobe Reader 11 on Windows 2012 R2 and 64-bit Windows 8.1 devices.
Workaround: None.
- vCenter configuration on App Volumes Manager using IPv6 address fails.
Workaround: None.
- If multi-vCenter is used and there is a shared datastore between two or more vCenters, that datastore might be displayed more than once.
Workaround: None. | https://docs.vmware.com/en/VMware-App-Volumes/2.11/rn/app-volumes-211-release-notes.html | 2018-01-16T17:14:11 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.vmware.com |
I lose my connection with a Bluetooth enabled car kit
Try the following actions:
- Verify that your car kit is using the latest software version available. For more information about your Bluetooth enabled car kit's software version, see the documentation that came with your car kit.
- Move your BlackBerry smartphone to another location in your vehicle or turn your smartphone to face another direction. The location of your BlackBerry smartphone's Bluetooth antenna in relation to your car kit's Bluetooth antenna may affect the Bluetooth connection.
9.6 Low-Level Distribution Functions
The following functions are provided for users who need lower overhead than that of distribution objects, such as untyped Racket users (currently), and library writers who are implementing their own distribution abstractions.
Because applying these functions is meant to be fast, none of them have optional arguments. In particular, the boolean flags log? and 1-p? are always required.
Every low-level function’s argument list begins with the distribution family parameters. In the case of pdfs and cdfs, these arguments are followed by a domain value and boolean flags. In the case of inverse cdfs, they are followed by a probability argument and boolean flags. For sampling procedures, the distribution family parameters are followed by the requested number of random samples.
Generally, prob is a probability parameter, k is an integer domain value, x is a real domain value, p is the probability argument to an inverse cdf, and n is the number of random samples.
9.6.1 Integer Distribution Functions
(flpoisson-median mean) runs faster than (flpoisson-inv-cdf mean 0.5 #f #f), significantly so when mean is large.
9.6.2 Real Distribution Functions
To get delta-distributed random samples, use (make-flvector n mean). | http://docs.racket-lang.org/math/dist_flonum.html | 2015-05-22T10:00:47 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.racket-lang.org |
Information for "Article/en" Basic information Display titleChunk:Article Default sort keyArticle/en Page length (in bytes)1,608 Page ID309. Latest editorFuzzyBot (Talk | contribs) Date of latest edit12:55, 23 December 2014 Total number of edits11 Pages transcluded on (4)Templates used on this page: Article (view source) Article/en (view source) User talk:Ste (view source) Translations:Article/1/en (view source)
Difference between revisions of "Components Contact Categories"
From Joomla! Documentation
Revision as of 14:45, 10 January 2011
Contents, there are four drop-down list boxes as shown below:
The selections may be combined. Only items matching all selections will be displayed in the list.
- Select Max Levels.
-.
Categories | https://docs.joomla.org/index.php?title=Help16:Components_Contact_Categories&diff=34215&oldid=34212 | 2015-05-22T10:51:03 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
All public logs
Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 16:24, 1 September 2012 JoomlaWikiBot (Talk | contribs) automatically marked revision 73649 of page Where can you download the template used on the site? patrolled
- 16:24, 1 September 2012 MediaWiki default (Talk | contribs) allowed
- 04:47, 26 October 2008 Chris Davenport (Talk | contribs) marked revision 11353 of page Where can you download the template used on the site? patrolled | https://docs.joomla.org/index.php?title=Special:Log&page=Where+can+you+download+the+template+used+on+the+www.joomla.org+site%3F | 2015-05-22T11:15:32 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Changes related to "How do you enter raw HTML in editors?"
← How do you enter raw HTML in editors?
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&days=7&from=&target=How_do_you_enter_raw_HTML_in_editors%3F | 2015-05-22T10:31:01 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
JLanguage/getLanguagePath
From Joomla! Documentation
< API15:JLanguage
Revision as of 13:33
Gets the path to a language.
Syntax
getLanguagePath($basePath=JPATH_BASE, $language=null)
Returns
string language related path or null
Defined in
libraries/joomla/language/language.php
Importing
jimport( 'joomla.language.language' );
Source Body
function getLanguagePath($basePath = JPATH_BASE, $language = null)
{
    $dir = $basePath.DS.'language';
    if (!empty($language)) {
        $dir .= DS.$language;
    }
    return $dir;
}
Examples
Tutorials
From Joomla! Documentation
Revision as of 07:20, 13 September 2012 by Tom Hutchison (Talk | contribs)
This is a category for Joomla! Tutorials.
Subcategories
This category has the following 9 subcategories, out of 9 total.
Pages in category ‘Tutorials’
The following 66 pages are in this category, out of 290 total.
Toolbar buttons missing after upgrade
From Joomla! Documentation
Revision as of 09:07, 4 August 2013 by Tom Hutchison (Talk | contribs)
This. | https://docs.joomla.org/index.php?title=J3.3:Toolbar_buttons_missing_after_upgrade&oldid=102036 | 2015-05-22T10:56:39 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "Pizza Bugs and Fun 2/Contributors List"
From Joomla! Documentation
< Pizza Bugs and Fun 2
Revision as of 20:50, 18 November 2011
If you made a contribution to the documentation held on this site, please feel free to add your name below. Please consider adding your real name in addition to your nickname. You may, optionally, include a link to your website.
Contributors to the PBF 2011
- Your name here :) | https://docs.joomla.org/index.php?title=Pizza_Bugs_and_Fun_2/Contributors_List&diff=63037&oldid=10340 | 2015-05-22T11:09:53 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "JUtility::sendMail"
Description
Mail function (uses phpMailer).
Description: JUtility::sendMail
public static function sendMail($from, $fromname, $recipient, $subject, $body, $mode=0, $cc=null, $bcc=null, $attachment=null, $replyto=null, $replytoname=null)
See also
JUtility::sendMail source code on BitBucket
Class JUtility
Subpackage Utilities
- Other versions of JUtility::sendMail
SeeAlso: JUtility::sendMail
User contributed notes
Information for "Show a Module on all Menu Items except selected ones" Basic information Display titleShow a Module on all Menu Items except selected ones Default sort keyShow a Module on all Menu Items except selected ones Page length (in bytes)2,420 Page ID3300. Latest editorGreblys (Talk | contribs) Date of latest edit06:56, 3 July 2012 Total number of edits5 Total number of distinct authors5
Administration Guide
Local Navigation
Specifying a BlackBerry MDS Connection Service as a central push server
At least one BlackBerry® MDS Connection Service in your organization's BlackBerry Domain must act as a central push server. Central push servers receive content push requests from server-side applications that are located on an application server or on a web server. Central push servers also manage push requests and send application data and application updates to BlackBerry device applications.
If a BlackBerry Domain includes one BlackBerry MDS Connection Service that is version 5.0 or later, by default, that BlackBerry MDS Connection Service is the central push server. If two BlackBerry MDS Connection Service instances that are version 5.0 or later exist in a BlackBerry Domain, by default, both instances are central push servers. If more than two BlackBerry MDS Connection Service instances (that are version 5.0 or later) exist in a BlackBerry Domain, the first two instances that start are central push servers. You can configure any BlackBerry MDS Connection Service in your organization's BlackBerry Domain to act as a central push server. If a BlackBerry MDS Connection Service in your organization's environment is earlier than version 5.0, it is not designated as a central push server automatically when it starts.
Specify a BlackBerry MDS Connection Service as a central push server
- In the BlackBerry Administration Service, in the Servers and components menu, expand BlackBerry Solution topology > BlackBerry Domain > Component view > MDS Connection Service.
- Click the instance that you want to change.
- Click Edit instance.
- In the General section, in the Is centralized push server drop-down list, click Yes.
- Click Save all.
This is the BuildBot documentation for Buildbot version 0.8.7p1.
If you are evaluating Buildbot and would like to get started quickly, start with the Tutorial. Regular users of Buildbot should consult the Manual, and those wishing to modify Buildbot directly will want to be familiar with the Developer's Documentation.
Table Of Contents¶
Buildbot Tutorial¶
Contents:
First run: the quickest way through, installing into a sandbox rather than from packages of your distribution.
- To run buildbot, first enter the same sandbox you created before:
cd
cd tmp/buildbot
source sandbox/bin/activate
Install
Click Force Build - there's no need to fill in any of the fields in this case. Next, click on view in waterfall.
You will now see:
(screenshot of the waterfall page showing the forced build)
This is the BuildBot manual for Buildbot version 0.8.7p1.
Buildbot Manual¶
Installation¶
Concepts¶
This chapter defines some of the basic concepts that the Buildbot uses. You'll need to understand how the Buildbot sees the world to configure it properly.
Changes¶

A Change is an abstract way Buildbot uses to represent a single change to the source files, performed by a developer.
Codebase¶
This attribute specifies the codebase to which this change was made. As described above, multiple repositories may contain the same codebase. A change's codebase is usually determined by the codebaseGenerator configuration. A change may also carry properties, which are discussed in detail in the Build Properties section.
- A source stamp may also carry a patch; the buildbot try command creates this kind of SourceStamp. If patch is None, the patching step is bypassed.
The buildmaster is responsible for turning the BuildSet into a set of BuildRequest objects and queueing them on the appropriate Builders.
BuildRequests¶
A BuildRequest is a request to build a specific set of source code (specified by one or more source stamps, with one SourceStamp specification per codebase) on a single Builder.
Builders¶

Each Builder is configured with a name and a builddir (the buildslave-side directory where the actual checkout/compile/test commands are executed).
Build Factories¶
A builder also has a BuildFactory, which is responsible for creating new Build instances: because the Build instance is what actually performs each build, choosing the BuildFactory is the way to specify what happens each time a build is done (Builds).
Build Slaves¶
User Objects¶
User Objects allow Buildbot to better manage users throughout its various interactions with users (see Change Sources and Status Targets). Each version control system has its own way of identifying the user who made a change:

- git
- who attributes take the form Full Name <Email>.
- svn
- who attributes are of the form Username.
- hg
- who attributes are free-form strings, but usually adhere to similar conventions as git attributes (Full Name <Email>).
- cvs
- who attributes are of the form Username.
- darcs
- who attributes contain an Email and may also include a Full Name like git attributes.
- bzr
- who attributes are free-form strings like hg, and can include a Username, Email, and/or Full Name.
Tools¶
For managing users manually, use the buildbot user command, which allows you to add, remove, update, and show various attributes of users in the Buildbot database (see Command-line Tool).
To show all of the users in the database in a more pretty manner, use the users page in the WebStatus.
Another way to utilize User Objects is through UsersAuth for web authentication (see WebStatus). To use UsersAuth, you need to set a bb_username and bb_password via the buildbot user command line tool to check against. The password will be encrypted before storing in the database along with other user attributes.
Warning
Defining a codebaseGenerator that returns non-empty (not '') codebases will change the behavior of all the schedulers.
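A minimal sketch of such a generator, keyed on the change's repository (the repository URLs and codebase names below are placeholders):

all_repositories = {
    r'https://hg.example.com/mailsuite/mailclient': 'mailexe',
    r'https://hg.example.com/mailsuite/mapilib':    'mapilib',
}

def codebaseGenerator(chdict):
    # map every incoming change onto a codebase name, based on its repository
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator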
PBChangeSource¶
The PBChangeSource is created with the following arguments.
- port
- (optional): the port to listen on; if not specified, the change source shares the buildmaster's slave port.
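A typical configuration looks like the sketch below (the port, user name and password are of course placeholders):

from buildbot.changes.pb import PBChangeSource
c['change_source'] = PBChangeSource(port=9999, user='laura', passwd='fpga')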
Bzr Hook¶
GitPoller¶
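The GitPoller periodically fetches from a remote Git repository and processes any new changes. A minimal sketch (the repository URL, work directory and branch are placeholders):

from buildbot.changes.gitpoller import GitPoller
c['change_source'] = GitPoller(
        'git@example.com:foobaz/myrepo.git',
        workdir='gitpoller-workdir',
        branch='master',
        pollinterval=300)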
GoogleCodeAtomPoller¶

from googlecode_atom import GoogleCodeAtomPoller

c['change_source'] = GoogleCodeAtomPoller(
    feedurl="",
    pollinterval=10800)
(note that you will need to download googlecode_atom.py from the Buildbot source and install it somewhere on your PYTHONPATH first)
Schedulers¶
Schedulers are responsible for initiating builds on builders.
Some schedulers listen for changes from ChangeSources and generate build sets in response to these changes. Others generate build sets without changes, based on other events in the buildmaster.
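As a sketch of how scheduler arguments such as name, builderNames and properties are typically passed (the builder name and owner addresses are placeholders):

from buildbot.schedulers.basic import SingleBranchScheduler
c['schedulers'].append(
    SingleBranchScheduler(name="mainline",
                          branch='master',
                          treeStableTimer=60,
                          builderNames=["full-linux"],
                          properties={'owner': ['dev-one@example.com',
                                                'dev-two@example.com']}))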
- codebases
- When builds are performed on more than one repository at the same time, then a corresponding codebase definition should be passed for each repository. A codebase definition is a dictionary with one or more of the following keys: repository, branch, revision. The codebase definitions have also to be passed as dictionary.
codebases = {'codebase1': {'repository':'....', 'branch':'default', 'revision': None}, 'codebase2': {'repository':'....'} }
Important
codebases behaves also like a change_filter on codebase. The scheduler will only process changes when their codebases are found in codebases. By default codebases is set to {'':{}} which means that only changes with codebase '' (default value for codebase) will be accepted by the scheduler.
Buildsteps can have a reference to one of the codebases. The step will only get information (revision, branch etc.) that is related to that codebase. When a scheduler is triggered by new changes, these changes (having a codebase) will be incorporated by the new build. The buildsteps referencing to the codebases that have changes get information about those changes. The buildstep that references to a codebase that does not have changes in the build get the information from the codebases definition as configured in the scheduler.
- onlyImportant
- A boolean that, when True, only adds important changes to the buildset as specified in the fileIsImportant callable. This means that unimportant changes are ignored the same way a change_filter filters changes. This defaults to False and only applies when fileIsImportant is given.
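A sketch of a fileIsImportant callable that treats changes touching only documentation files as unimportant (the filename test and builder name are only illustrations):

from buildbot.schedulers.basic import SingleBranchScheduler

def file_is_important(change):
    # a change is important if it touches anything other than .txt files
    for name in change.files:
        if not name.endswith(".txt"):
            return True
    return False

c['schedulers'].append(
    SingleBranchScheduler(name="mainline",
                          branch='master',
                          treeStableTimer=60,
                          fileIsImportant=file_is_important,
                          builderNames=["full-build"]))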
The remaining subsections represent a catalog of the available Scheduler types. All these Schedulers are defined in modules under buildbot.schedulers, and the docstrings there are the best source of documentation on the arguments taken by each one.
Change Filters¶
Several schedulers perform filtering on an incoming set of changes. The filter can most generically be specified as a ChangeFilter. Set up a ChangeFilter like this:
from buildbot.changes.filter import ChangeFilter
my_filter = ChangeFilter(project = 'myproject')
or accept any of a set of values:
my_filter = ChangeFilter(project = ['myproject', 'jimsproject'])
or apply a regular expression, using the attribute name with a "_re" suffix:
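For example (an illustrative sketch; the same pattern works for any change attribute):

my_filter = ChangeFilter(branch_re = "release-.*")
# or combine several criteria:
my_filter = ChangeFilter(project = 'myproject', category_re = ".*deve.*")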
- onlyImportant
- See Configuring Schedulers.
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes import filter
quick = SingleBranchScheduler(name="quick",
                              change_filter=filter.ChangeFilter(branch='master'),
                              treeStableTimer=60,
                              builderNames=["quick-linux", "quick-netbsd"])
full = SingleBranchScheduler(name="full",
                             change_filter=filter.ChangeFilter(branch='master'),
                             treeStableTimer=5*60,
                             builderNames=["full-linux", "full-netbsd", "full-OSX"])
c['schedulers'] = [quick, full]

The class name is buildbot.schedulers.basic.SingleBranchScheduler.
AnyBranchScheduler¶
This scheduler uses a tree-stable-timer like the default one, but uses a separate timer for each branch.
The arguments to this scheduler are:
name
builderNames
properties
fileIsImportant
change_filter
- onlyImportant
Dependent Scheduler¶

from buildbot.schedulers import basic
tests = basic.SingleBranchScheduler(name="just-tests",
                                    treeStableTimer=5*60,
                                    builderNames=["full-linux", "full-netbsd", "full-OSX"])
package = basic.Dependent(name="build-package",
                          upstream=tests,  # note: the scheduler object itself, not its name
                          builderNames=["make-tarball", "make-deb", "make-rpm"])
c['schedulers'] = [tests, package]
- periodicBuildTimer
- The time, in seconds, after which to start a build.
Example:
from buildbot.schedulers import timed
nightly = timed.Periodic(name="daily",
                         builderNames=["full-solaris"],
                         periodicBuildTimer=24*60*60)
c['schedulers'] = [nightly]
- codebases
- See Configuring Schedulers. Note that fileIsImportant and change_filter are only relevant if onlyIfChanged is True.
- onlyIfChanged
- If this is true, then builds will not be scheduled at the designated time unless the specified branch has seen an important change since the previous build.
- branch
- (required) The branch to build when the time comes. Remember that a value of None here means the default branch.

Example:

from buildbot.schedulers import timed
c['schedulers'].append(
    timed.Nightly(name='nightly',
                  branch='master',
                  builderNames=['builder1', 'builder2'],  # placeholder builder names
                  hour=3, minute=0))
- codebases
- See Configuring Schedulers.
This class is only useful in conjunction with the Trigger step. Here is a fully-worked example:
from buildbot.schedulers import basic, timed, triggerable
from buildbot.process import factory
from buildbot.steps import trigger

checkin = basic.SingleBranchScheduler(name="checkin",
                                      branch=None,
                                      treeStableTimer=5*60,
                                      builderNames=["checkin"])
nightly = timed.Nightly(name='nightly',
                        branch=None,
                        builderNames=['nightly'],
                        hour=3, minute=0)

mktarball = triggerable.Triggerable(name="mktarball",
                                    builderNames=["mktarball"])
build = triggerable.Triggerable(name="build-all-platforms",
                                builderNames=["build-all-platforms"])
test = triggerable.Triggerable(name="distributed-test",
                               builderNames=["distributed-test"])
package = triggerable.Triggerable(name="package-all-platforms",
                                  builderNames=["package-all-platforms"])
c['schedulers'] = [mktarball, checkin, nightly, build, test, package]

NightlyTriggerable Scheduler¶

from buildbot.schedulers import basic, timed
from buildbot.process import factory
from buildbot.steps import shell, trigger

checkin = basic.SingleBranchScheduler(name="checkin",
                                      branch=None,
                                      treeStableTimer=5*60,
                                      builderNames=["checkin"])
nightly = timed.NightlyTriggerable(name='nightly',
                                   builderNames=['nightly'],
                                   hour=3, minute=0)
c['schedulers'] = [checkin, nightly]

# on checkin, run tests
checkin_factory = factory.BuildFactory()
checkin_factory.addStep(shell.Test())
checkin_factory.addStep(trigger.Trigger(schedulerNames=['nightly']))

# and every night, package the latest successful build
nightly_factory = factory.BuildFactory()
nightly_factory.addStep(shell.ShellCommand(command=['make', 'package']))
ForceScheduler Scheduler¶
username
codebases
A list of strings or CodebaseParameter specifying the codebases that should be presented. The default is a single codebase with no name.
properties
A list of parameters, one for each property. These can be arbitrary parameters, where the parameter's name is taken as the property name, or AnyPropertyParameter, which allows the web user to specify the property name.
An example may be better than long explanation. What you need in your config file is something like:
from buildbot.schedulers.forcesched import *

sch = ForceScheduler(name="force",
                     builderNames=["my-builder"],

                     # will generate a combo box
                     branch=ChoiceStringParameter(name="branch",
                                                  choices=["main","devel"],
                                                  default="main"),

                     # will generate a text input
                     reason=StringParameter(name="reason",
                                            label="reason:<br>",
                                            required=True, size=80),

                     # will generate nothing in the form, but revision, repository,
                     # and project are needed by buildbot scheduling system so we
                     # need to pass a value ("")
                     revision=FixedParameter(name="revision", default=""),
                     repository=FixedParameter(name="repository", default=""),
                     project=FixedParameter(name="project", default=""),

                     # in case you dont require authentication this will display
                     # input for user to type his name
                     username=UserNameParameter(label="your name:<br>", size=80),

                     # A completely customized property list. The name of the
                     # property is the name of the parameter
                     properties=[
                         BooleanParameter(name="force_build_clean",
                                          label="force a make clean",
                                          default=False),
                         StringParameter(name="pull_url",
                                         label="optionally give a public git pull url:<br>",
                                         default="", size=80)
                     ])

c['schedulers'].append(sch)
ForceSched Parameters¶

# ...
    properties=[
        InheritBuildParameter(
            name="inherit",
            label="promote a build for merge",
            compatible_builds=get_compatible_builds,
            required = True),
    ])
The function can optionally return a Deferred, which should fire with the same results.
- properties
- A builder may be given a dictionary of Build Properties specific for this builder in this parameter. Those values can be used later on like other properties (see Interpolate).
Requests are only candidates for a merge if both requests have exactly the same codebases.
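A sketch of a custom merge policy installed via the global mergeRequests option (the builder name is a placeholder, and the fallback to the default compatibility check via canBeMergedWith is an assumption):

def mergeRequests(builder, req1, req2):
    # never merge requests on the 'release' builder; otherwise fall back
    # to the default compatibility test (same source stamps / codebases)
    if builder.name == 'release':
        return False
    return req1.canBeMergedWith(req2)

c['mergeRequests'] = mergeRequests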
- forced builds -- The "Force Build" form allows users to specify properties
- buildslaves -- A buildslave can pass properties on to the builds it performs.
Common Build Properties¶
The following build properties are set when the build is started, and are available to all steps.
- got_revision
- This property is set when a Source step checks out the source tree, and provides the revision that was actually obtained from the version control system.
- workdir
- The absolute path of the base working directory on the slave, of the current builder.
For single codebase builds, where the codebase is '', the following Source Stamp Attributes are also available as properties: branch, revision, repository, and project .
Source Stamp Attributes¶
branch
revision
repository
project
codebase
changes
This attribute is a list of dictionaries representing the changes that make up this sourcestamp.
has_patch
patch_level
patch_body
patch_subdir
patch_author
patch_comment
These attributes are set if the source stamp was created by a try scheduler.
Using Properties in Steps¶
For the most part, properties are used to alter the behavior of build steps during a build. This is done by annotating the step definition in master.cfg with placeholders. When the step is executed, these placeholders are replaced with the current values of the build properties.
Property¶
The simplest form of annotation is to wrap the property name with Property:
from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import Property

f.addStep(ShellCommand(command=[ 'echo', 'buildername:', Property('buildername') ]))
The default value can reference other properties, e.g.,
command=Property('command', default=Property('default-command'))

Interpolate¶

Property can only be used to replace an entire argument; often, properties need to be interpolated into strings instead. The tool for that job is Interpolate:

from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import Interpolate
f.addStep(ShellCommand(command=[ 'make',
                                 Interpolate('REVISION=%(prop:got_revision)s'),
                                 'dist' ]))
- kw
- The key refers to a keyword argument passed to Interpolate.
The following ways of interpreting the value are available.
- -replacement
- If the key exists, substitute its value; otherwise, substitute replacement. replacement may be empty (%(prop:propname:-)s).
Although these are similar to shell substitutions, no other substitutions are currently supported.
Example
from buildbot.steps.shell import ShellCommand from buildbot.process.properties import Interpolate f.addStep(ShellCommand(command=[ 'make', Interpolate('REVISION=%(prop:got_revision:-%(src::revision:-unknown)s)s') 'dist' ]))
In addition, Interpolate supports using positional string interpolation. Here, %s is used as a placeholder, and the substitutions (which may themselves be placeholders), are given as subsequent arguments:
..:
@properties.renderer def makeCommand(props): command = [ 'make' ] cpus = props.getProperty('CPUs') if cpus: command += [ '-j', str(cpus+1) ] else: command += [ '-j', '2' ] command += [ 'all' ] return command f.addStep(ShellCommand(command=makeCommand))
You can think of renderer as saying "call this function when the step starts".
WithProperties¶.steps.shell import ShellCommand from buildbot.process.properties import WithProperties f.addStep(ShellCommand( command=["tar", "czf",,)', now=Now())])
This is equivalent to:
@renderer def now(props): return time.clock() ShellCommand(command=['make', Interpolate('TIME=%(kw:now)', now=now)])
Note that a custom renderable must be instantiated (and its constructor can take whatever arguments you'd like), whereas a renderer can be used directly. basic behavior for a BuildStep is to:
- run for a while, then stop
- possibly invoke some RemoteCommands on the attached build slave
-.
- Step object itself.
- hideStepIf
A step can be optionally hidden from the waterfall and build details web pages. To do this, set the step's hideStepIf to).
The old source steps are imported like this:
from buildbot.steps.source import Git
while new source steps are in separate source-packages for each version-control system:
from buildbot.steps.source.git import Git version-control system for development..factory = BuildFactory() from buildbot.steps.source.mercurial import Mercurial factory.addStep change hooks enabled; as the buildslave to create the string that will be passed to the hg clone command.
- branchType
- either 'dirname' (default) or 'inrepo' depending on whether the branch name should be appended to the repourl.
from buildbot.steps.source.git import Git factory.addStep default branch of the remote repository will be used.
- submodules
- (optional): when initializing/updating a Git repository, this decides whether or not buildbot should consider git submodules. Default: False.
-.
- retryFetch
- (optional): this value defaults to False. In any case if fetch fails argument.
method
(optional): defaults to fresh when mode is full..
-.
getDescription
(optional) After checkout, invoke a git describe on the revision and save the result in a property; the property's name is either commit-description or commit-description-foo, depending on whether the codebase argument was also provided. The argument should either be a bool or dict, and will change how git describe is called:
-
getDescription=False: disables this feature explicitly
-
getDescription=True or empty dict(): Run git describe with no args
-
getDescription={...}: a dict with keys named the same as the git option. Each key's value can be False or None to explicitly skip that argument.
For the following keys, a value of True app URL argument/calc sub-tree, you would directly use repourl="" as an argument to your SVN step.
If you are building from multiple branches, then you should create the SVN step with the repourl and provide branch information with Interpolate:
from buildbot.steps.source.svn import SVN factory.append(SVN(mode='incremental', repourl=Interpolate('svn://svn.example.org/svn/%(src::branch)s/myproject')))
Alternatively, the repourl argument can be used to create the SVN step without Interpolate:
from buildbot.steps.source.svn import SVN factory.append(SVN(mode='full', repourl='svn://svn.example.org/svn/myproject/trunk'))
-.
- export
- Similar to method='copy', except using svn export to create build directory so that there are no .svn directories in the build directory..bzr all steps, this indicates the directory where the build will take place. Source Steps are special in that they perform some operations outside of the workdir (like creating the workdir itself).
- alwaysUseLatest
- if True, bypass the usual update to the last Change behavior, and always update to the latest changes varies parameter is specified, the repository url will be taken exactly from the Change attribute. You are looking for that one if your ChangeSource step has all information useful when the change source knows where the repository resides locally, but don't know the scheme used to access it. For instance ssh://server/%s makes sense if the the repository attribute is the local path of the repository.
- dict
- In this case, the repository URL will be the value indexed by the repository attribute in the dict given as parameter.
- callable
- The callable given as parameter will take the repository attribute from the Change and its return value will be used as repository URL. URL argument that will be given to the svn checkout command. It dictates both where the repository is located and which sub-tree should be extracted. In this respect, it is like a combination of the CVS cvsroot and cvsmodule arguments. For example, if you are using a remote Subversion repository which is accessible through HTTP at a URL of, and you wanted to check out the trunk/calc sub-tree, you would use svnurl="" as an argument to your SVN step.
- .
It is possible to mix to have a mix of SVN steps that use either the svnurl or baseURL arguments but not both at the same time.
-.
-.
The Mercurial step takes the following arguments:
- repourl
- (required unless baseURL is provided): the URL at which the Mercurial hg clone command.
- branchType
- either 'dirname' (default) or 'inrepo' depending on whether the branch name should be appended to the baseURL. master branch will be used.
-.
- jobs
- (optional, defaults to None): Number of projects to fetch simultaneously while syncing. Passed to repo sync subcommand with "-j". executing a process of some sort on the buildslave. buildslave to pass the string to /bin/sh for interpretation, which raises all sorts of difficult questions about how to escape or interpret shell metacharacters.
If command contains nested lists (for example, from a properties substitution), then that list will be flattened before it is executed.
On the topic of shell metacharacters, note that in DOS the pipe character (|) is conditionally escaped (to ^|) when it occurs inside a more complex string in a list of strings. It remains unescaped when it occurs as part of a single string or as a lone pipe in a list of strings.
- following example will prepend /home/buildbot/lib/python to any existing PYTHONPATH:
from buildbot.steps.shell import ShellCommand f.addStep(ShellCommand( command=["make", "test"], env={'PYTHONPATH': "/home/buildbot/lib/python"}))
To avoid the need of concatenating path together in the master config file, if the value is a list, it will be joined together using the right platform dependant separator.
Those variables support expansion so that if you just want to prepend /home/buildbot/bin to the PATH environment variable, you can do it by putting the value ${PATH} at the end of the value like in the example below. Variables that don't exist on the slave will be replaced by "".
from buildbot.steps.shell import ShellCommand f.addStep(ShellCommand( command=["make", "test"], env={'PATH': ["/home/buildbot/bin", "${PATH}"]}))
Note that environment values must be strings (or lists that are turned into strings). In particular, numeric properties such as buildnumber must be substituted using Interpolate.
-"]))
- descriptionSuffix
This is an optional suffix appended to the end of the description (ie, after description and descriptionDone). This can be used to distinguish between build steps that would display the same descriptions in the waterfall. This parameter may be set to list of short strings, a single string, or None.
For example, a builder might use the Compile step to build two different codebases. The descriptionSuffix could be set to projectFoo and projectBar, respectively for each step, which will result in the full descriptions compiling projectFoo and compiling projectBar to be shown in the waterfall.
- logEnviron
- If this option is True (the default), then the step's logfile will describe the environment variables on the slave. slave or newer.
-. e.g: {0:SUCCESS,1:FAILURE,2:WARNINGS}, will treat the exit code 2 as WARNINGS. The default is to treat just 0 as successful. ({0:SUCCESS}) any exit code not present in the dictionary will be treated as FAILURE.steps.shell import Configure f.addStep(Configure())
Compile¶.steps.shell import Compile.steps.shell import PerlModuleTest f.append --parallel option.
Slave Filesystem Steps¶
Here are some buildsteps for manipulating the slave's.
RemoveDirectory¶
This command recursively deletes a directory on the slave.
from buildbot.steps.slave import RemoveDirectory f.addStep(RemoveDirectory(dir="build/build"))
This step requires slave version 0.8.4 or later. .pdf file). sphinx-build) Indicates the executable to run.
- (optional) List of tags to pass to sphinx-build
- defines
- (optional) Dictionary of defines to overwrite values of the conf.py file.
- mode
- (optional) String, one of full.steps.python import PyLint f.addStep())
Transferring Files¶, and add a link to the HTML status to the uploaded file.
from buildbot.steps.shell import ShellCommand from buildbot.steps.transfer import FileUpload f.addStep(ShellCommand(command=["make", "docs"])) f.addStep(FileUpload(slavesrc="docs/reference.html", masterdest="/home/bb/public_html/ref.html", url="")).
Note
The copied file will have the same permissions on the master as on the slave, look at the mode= parameter to set it differently. (Buildslave.
Transfering Directories¶, and add a link to the uploaded documentation on the HTML status page. On the slave-side the directory can be found under docs:
from buildbot.steps.shell import ShellCommand from buildbot.steps.transfer import DirectoryUpload f.addStep(ShellCommand(command=["make", "docs"])) f.addStep(DirectoryUpload(slavesrc="docs", masterdest="~/public_html slave, see buildslave create-slave --umask to change the default one.(Interpolate("%(src::branch)s-%(prop:got_revision)s\n"), slavedest="buildid.txt"))
StringDownload works just like FileDownload except it takes a single argument, s, representing the string to download instead of a mastersrc argument.
from buildbot.steps.transfer import JSONStringDownload buildinfo = { branch: Property('branch'), got_revision: Property('got_revision') }..steps.master import MasterShellCommand f.addStep(MasterShellCommand( command=["make", "www"], env={'PATH': ["/home/buildbot/bin", "${PATH}"]}))
Note that environment values must be strings (or lists that are turned into strings). In particular, numeric properties such as buildnumber must be substituted using Interpolate.
- interruptSignal
- (optional) Signal to use to end the process, if the step is interrupted.
Setting Properties¶
These steps set properties on the master based on information from the slave.
SetProperty¶".
SetPropertiesFromEnv¶
Buildbot slaves .steps.slave import SetPropertiesFromEnv from buildbot.steps.shell import Compile f.addStep(SetPropertiesFromEnv(variables=["SOME_JAVA_LIB_HOME", "JAVAC"])) f.addStep(Compile(commands=[Interpolate("%(prop:JAVAC)s"), "-cp", Interpolate("%(prop:SOME_JAVA_LIB_HOME)s"))) })
The schedulerNames= argument lists the Triggerable schedulers. Hyperlinks are added to the waterfall and the build detail web pages for each triggered build. If this argument is False (the default) or not given, then the buildstep succeeds immediately after triggering the schedulers.
The SourceStamps to use for the triggered build are controlled by the arguments updateSourceStamp, alwaysUseLatest, and sourceStamps. If updateSourceStamp is True (the default), then step updates the SourceStamp`s given to the :bb:sched:`Triggerable schedulers to include got_revision (the revision actually used in this build) as revision (the revision to use in the triggered builds). This is useful to ensure that all of the builds use exactly the same SourceStamp`s, even if other :class:`Changes have occurred while the build was running. If updateSourceStamp is False (and neither of the other arguments are specified), then the exact same SourceStamps are used. If alwaysUseLatest is True, then no SourceStamps are given, corresponding to using the latest revisions of the repositories specified in the Source steps. This is useful if the triggered builds use to a different source repository. The argument sourceStamps accepts a list of dictionaries containing the keys branch, revision, repository, project, and optionally patch_level, patch_body, patch_subdir, patch_author and patch_comment and creates the corresponding SourceStamps. If only one sourceStamp has to be specified then the argument sourceStamp can be used for a dictionary containing the keys mentioned above. The arguments updateSourceStamp, alwaysUseLatest, and sourceStamp can be specified using properties.
The set_properties parameter")}
The copy_properties parameter, given a list of properties to copy into the new build request, has been deprecated in favor of explicit use of set_properties.
Debian Build Steps¶
DebPbuilder¶
The DebPbuilder step builds Debian packages within a chroot built by pbuilder. It populates the changeroot with a basic system and the packages listed as build requirement. The type of chroot to build is specified with the distribution, distribution and mirror parameter. To use pbuilder your buildbot must have the right to run pbuilder as root through sudo.
from buildbot.steps.package.deb.pbuilder import DebPbuilder f.addStep(DebPbuilder())
The step takes the following parameters
- architecture
- Architecture to build chroot for.
- distribution
- Name, or nickname, of the distribution. Defaults to 'stable'.
-.steps.package.deb.lintian import DebLintian f.addStep(DebLintian(fileloc=Interpolate("%(prop:deb-changes)s")))()) "exclusive".
A build or build step proceeds only when it has acquired all locks. If a build or step needs a lot of locks, it may be starved [3].
Status.status.html import WebStatus c['status'].append(html orLink which will be rendered on the WebStatus for unauthenticated users as a link named Login.
authz = Authz(useHttpHeader=True, httpLoginLink='')
A configuration example with Apache HTTPD as reverse proxy could look like the following.
authz = Authz( useHttpHeader=True, httpLoginLink='', auth =.
These arguments adds an URL link to various places in the WebStatus, such as revisions, repositories, projects and, optionally, ticket/bug references in change comments..
The Google Code hook is quite similar to the GitHub Hook. It has one option for the "Post-Commit Authentication Key" used to check if the request is legitimate:
c['status'].append(html'}}
The poller hook allows you to use GET requests to trigger polling. One advantage of this is your buildbot instance can (at start up) poll to get changes that happened while it was down, but then you can still use a commit hook to get fast notification of new changes.
Suppose you have a poller configured like this:
c['change_source'] = SVNPoller( svnurl="", split_file=split_file_branches)
And you configure your WebStatus to enable this hook:
c['status'].append(html.WebStatus( …, change_hook_dialects={'poller': True} ))
Then you will be able to trigger a poll of the SVN repository by poking the /change_hook/poller URL from a commit hook like this:
curl
If no poller argument is provided then the hook will trigger polling of all polling change sources.
You can restrict which pollers the webhook has access to using the allowed option:
c['status'].append(html.WebStatus( …, change_hook_dialects={'poller': {'allowed': ['']}} )) import cgi, datetime.
- all
- Always send mail about builds.
Defaults to (failing, passing, warnings).
-.
Regardless of the setting of lookup, MailNotifier will also send mail to addresses in the extraRecipients list.
- Interpolate dictionary from buildbot.status.builder import Results, SUCCESS, RETRY def gerritReviewCB(builderName, build, result, status, arg): if result == RETRY: return None, 0, 0 message = "Buildbot finished compiling your patchset\n" message += "on configuration: %s\n" % builderName message += "The result is: %s\n" % Results[result].upper() if arg: message += "\nFor more details visit:\n" message += status.getURLForThing(build) + "\n" # message, verified, reviewed return message, (result == SUCCESS or -1), 0 c['buildbotURL'] = '' c['status'].append(GerritStatusPush('127.0.0.1', 'buildbot', reviewCB=gerritReviewCB, reviewArg=c['buildbotURL']))
GerritStatusPush sends review of the Change back to the Gerrit server. reviewCB should return a tuple of message, verified, reviewed. If message is None, no review will be sent..changes.svnpoller import split_file_branches def split_file_projects_branches(path): if not "/" in path: return None project, path = path.split("/", 1) f = split_file_branches(path) if f: info = dict(project=project, path=f[1]) if f[0]: info['branch'] = f[0] return info return f
Again, this is provided by default. To use it you would do this:
from buildbot.changes.svnpoller import SVNPoller, split_file_projects_branches c['change_source'] = SVNPoller( svnurl="", split_file_stamp): return hashlib.md5 (source_stamp.repository).hexdigest()[:8] build_factory = factory.BuildFactory() build_factory.workdir = workdir build_factory.addStep(Git(mode="update")) # ... builders.append ({'name': 'mybuilder', 'slavename': 'myslave', 'builddir': 'mybuilder', 'factory': build_factory})
Running Commands¶
To spawn a command in the buildslave, create a RemoteCommand instance in your step's start method and run it with runCommand:
cmd = Rem", Interpolate("buildnum=%(prop:buildnumber)s")]", Interpolate("buildnum=%(prop:buildnumber)s")].
Developer Tools¶
These tools are provided for use by the developers who are working on the code that the buildbot is monitoring.
statuslog¶).
statusgui¶
If you have set up a PBListener, you will be able
to monitor your Buildbot using a simple Gtk+ application invoked with
the
The Change also has a comments attribute, which is a string containing any checkin comments. | http://docs.buildbot.net/0.8.7p1/full.html | 2015-05-22T10:00:10 | CC-MAIN-2015-22 | 1432207924919.42 | [array(['_images/index.png', 'index page'], dtype=object)
array(['_images/waterfall-empty.png', 'empty waterfall.'], dtype=object)
array(['_images/force-build.png', 'force a build.'], dtype=object)
array(['_images/runtests-success.png', 'an successful test run happened.'],
dtype=object)
array(['_images/irc-testrun.png',
'a successful test run from IRC happened.'], dtype=object)] | docs.buildbot.net |
Test whether each element of a 1-D array is also present in a second array.
Returns a boolean array the same length as ar1 that is True where an element of ar1 is in ar2 and False otherwise.
See also
Notes
in1d can be considered as an element-wise function version of the python keyword in, for 1-D sequences. in1d(a, b) is roughly equivalent to np.array([item in b for item in a]).
New in version 1.4.0.
Examples
>>> test = np.array([0, 1, 2, 5, 0]) >>> states = [0, 2] >>> mask = np.in1d(test, states) >>> mask array([ True, False, True, False, True], dtype=bool) >>> test[mask] array([0, 2, 0]) | http://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.in1d.html | 2015-05-22T10:05:11 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.scipy.org |
Help Center
Local Navigation
Spelling checker
You can use the items in the net.rim.blackberry.api.spellcheck package to add spelling checker functionality to an application. The SpellCheckEngine interface enables an application to check the spelling of a UI field value and provide a BlackBerry® device user with options for spelling corrections. The SpellCheckUI interface enables an application to provide a UI that allows a BlackBerry device user to resolve a spelling issue- by interacting with the SpellCheckEngine implementation.
For more information about using the Spell Check API, see the Spell Check sample application, which is provided with BlackBerry® Java® Development Environment 4.3.1 or later, and with the BlackBerry® Java® Plug-in for Eclipse®.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/17971/Spell_check_508128_11.jsp | 2015-05-22T10:09:53 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.blackberry.com |
Documentation for VirtueMart Breezing Forms Plugin PRO Version 3.0
Breezing Forms Plugin PRO for VirtueMart allows you to require custom Crosstec Breezing Forms to become part of the VirtueMart checkout workflow in Joomla when shoppers buy specific product(s).
VirtueMart 2 offers some great Custom Field functionality. But it can be a challenge to learn, and there are some things that no matter how hard you try, you just can't do yet.
For example, you can't make one custom field's choices dependent on the values selected in another custom field. Parent/child products can help address this, but you might not want to have a bunch of child product listings and SKUs just to achieve some basic dependencies.
With the VirtueMart Breezing Forms Plugin PRO v3.0, you can:
- Develop completely custom forms to collect ANY information you want
- Present those form(s) during checkout based on specific product selection
- Have infinite possibilities for conditional fields, complex form queries, and more by integrating with a powerful forms component
- Bring only the final shopper responses from the form that you choose and have them display inside the VirtueMart order
- Just imagine the possibilities...
Here are some ways it might be used:
- Selling Medicare approved products, where you need to collect patient information upon checkout, but only if they order specific qualified products
- Selling catering delivery, where shoppers need to provide directions and delivery instructions for each product or just once for the order
- A store that sells contacts, or eyeglasses, that needs to collect important information in a form at checkout, which they want to keep on file for subsequent orders
- Collect detailed information during purchases, and integrate the data with SalesForce (Breezing Forms v1.8 now supports this!)
- And lots more situations when you just need to do more than VirtueMart 2 custom fields can provide
What our extension does is simply allow Administrators to seamlessly integrate custom Breezing Form(s) into the VirtueMart checkout process. | http://docs.polishedgeek.com/wiki/display/BF/VirtueMart+Breezing+Forms+Plugin+PRO+Documentation | 2015-05-22T09:57:35 | CC-MAIN-2015-22 | 1432207924919.42 | [array(['/wiki/download/attachments/589833/PGExtHz1.jpg?version=1&modificationDate=1335128741000&api=v2',
None], dtype=object) ] | docs.polishedgeek.com |
Contacts
The Comorbidity4j tool and the Comorbidity4web Service are both developed and maintained by the:
Integrative Biomedical Informatics Group
part of the Research Programme on Biomedical Informatics (GRIB), a joint research programme of the Hospital del Mar Medical Research Institute (IMIM) and the Department of Experimental and Health Sciences of the Universitat Pompeu Fabra in Barcelona.
If you need any support in using the tool or if you want to provide us with feedback and suggestions, please send an email to francesco<DOT>ronzano<AT>upf<DOT>edu. | https://comorbidity4j.readthedocs.io/en/latest/Contacts/ | 2021-10-15T22:35:07 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../img/logo.png', 'Comorbidity4j'], dtype=object)] | comorbidity4j.readthedocs.io |
MailChimp enables you to engage your leads with great email marketing campaigns. Connecting Cooladata to your MailChimp account will allow you to link these campaigns with any other data source, to capitalize on the leads that matter.
Each day, the integration fetches the previous day’s aggregative data and appends a row to the table. For example, on Aug 4th a row would be added showing aggregated data up until Aug 3rd, such as how many emails were sent from the beginning of the campaign. See the full table scheme below.
To integrate MailChimp to your Cooladata project please contact your customer success manager or email us at [email protected]. | https://docs.cooladata.com/mailchimp/ | 2021-10-15T22:35:28 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.cooladata.com |
Privacy Statement for Personally Identifiable Information of ngideas.com
Updated: Jan 6th, 2021
ngideas.com (referred to as “we”, “us”, “ngideas.com” your private tickets.
Summary
ngide.
ngide of minors
We do not allow minors (persons under the age of 13) to use our site. Any accounts found in violation of this term will be terminated without a refund and all information pertaining to that user account will be erased.
Use of collected information
ngideas.com will NOT sell, rent or otherwise willingly transmit the recorded personal information to any third party. Your privacy is important to us and we will not even offer aggregate information resulting from personal information processing to any part outside ngide ngide
You may opt for ngideas.com newsletter, however, in no case, the newsletter will be sent to you if you unsubscribe to it.
These forms of contact are conducted by automated software running on our server. You always have the chance to opt-out from such communication. ngide.
Contact for marketing purposes.
Information sharing with third parties
Information collected by ngide.
Duration of personal personal information for as long as we have a business relationship with you as evidenced by the existence of an active subscription or a log in to your account. denied ngideas.com? ngideas.com uses cookies to track your preferences regarding our site’s features, including the ticket system and the “remember me” login feature. We also use cookies to authenticate you to members-only features. If you block ngideas.com’s cookies you may not be able to use our members-only features as a consequence. Moreover, in order to ensure your security, ngide. ngideas.com employs the services of third-party services and content provides. Most notably we use Google Analytics to track website usage. Each of these services sets its own cookies. ngideas.com doesn’t have direct access to those cookies and the use of those third-party cookies is governed solely by the respective service/content provider’s Privacy Statement. For more information you may consult the following resources:
- Google privacy center. Information on use of cookies by Google AdSense and Google Analytics, as well as information from opting-out of the collection of such information.
Contact information
If you have any question regarding our privacy policy you may contact us by one of the following means:
- Sending an email to support at ngideasdot com
- You can use the “Support” link | https://docs.ngideas.com/privacy-policy/ | 2021-10-16T00:23:10 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.ngideas.com |
The bootstrap.xml file provided in the GitHub repository uses admin/admin as the username and password for the firewall administrator. Before deploying the CFT in a production environment, at a minimum, you must create a unique username and password for the administrative account on the VM-Series firewall. Optionally, you can fully configure the firewall with zones, policy rules, security profiles and export a golden configuration snapshot. You can then use this configuration snapshot as the bootstrap.xml file for your production environment.
Document:VM-Series Deployment Guide
Customize the Bootstrap.xml File
Last Updated:
May 1, 2020
Current Version:
7.1 (EoL) | https://docs.paloaltonetworks.com/vm-series/7-1/vm-series-deployment/set-up-the-vm-series-firewall-in-aws/customize-the-bootstrap-xml-file.html | 2021-10-16T00:28:49 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.paloaltonetworks.com |
You can configure DHCP on each segment regardless of whether the segment is connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.
- DHCP local server
- DHCP relay
- Gateway DHCP (supported only for IPv4 subnets in a segment)
For a standalone segment that is not connected to a gateway, only local DHCP server is supported.
- Segments configured with an IPv6 subnet can have either a local DHCPv6 server or a DHCPv6 relay. Gateway DHCPv6 is not supported.
- DHCPv6 Options (classless static routes and generic options) are not supported.
Prerequisites
- DHCP profile is added in the network.
- If you are configuring Gateway DHCP on a segment, a DHCP profile must be attached to the directly connected tier-1 or tier-0 gateway.
Procedure
- From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
- Either add or edit a segment.
- To configure a new segment, click Add Segment.
- To modify the properties of an existing segment, click the vertical ellipses next to the name of an existing segment, and then click Edit.
- If you are adding a segment, ensure that the following segment properties are specified:
- Segment name
- Connectivity
- Transport zone
- Subnets (required for a gateway-connected segment, optional for a standalone segment)
If you are editing an existing segment, go directly to the next step.
- Click Set DHCP Config.
- In the DHCP Type drop-down menu, select any one of the following types.On a segment, IPv6 and IPv4 subnets always use the same DHCP type. Mixed configuration is not supported.Note: In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service is in use, you cannot change the DHCP type of a gateway-connected segment. Starting in version 3.0.2, you can change the DHCP type of a gateway-connected segment.
- In the DHCP Profile drop-down menu, select the name of the DHCP server profile or DHCP relay profile.
Note: In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service is in use, you cannot change the DHCP profile of the segment. Starting in version 3.0.2, you can change the DHCP profile of the segment that uses a DHCP local server or a DHCP relay.
- If the segment is connected to a gateway, Gateway DHCP server is selected by default. The DHCP profile that is attached to the gateway is autoselected. The name and server IP address are fetched automatically from that DHCP profile and displayed in a read-only mode.
When a segment is using a Gateway DHCP server, ensure that an edge cluster is selected either in the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in either the profile or the gateway, an error message is displayed when you save the segment.
- If you are configuring a local DHCP server or a DHCP relay on the segment, you must select a DHCP profile from the drop-down menu. If no profiles are available in the drop-down menu, click the vertical ellipses icon and create a DHCP profile. After the profile is created, it is automatically attached to the segment.
When a segment is using a local DHCP server, ensure that the DHCP server profile contains an edge cluster. If an edge cluster is unavailable in the profile, an error message is displayed when you save the segment.
- Click the IPv4 Server or IPv6 Server tab.If the segment contains an IPv4 subnet and an IPv6 subnet, you can configure both DHCPv4 and DHCPv6 servers on the segment.
- Configure the DHCP settings.
- Enable the DHCP configuration settings on the subnet by clicking the DHCP Config toggle button.
- In the DHCP Server Address text box, enter the IP addresses.
- If you are configuring a DHCP local server, server IP address is required. A maximum of two server IP addresses are supported. One IPv4 address and one IPv6 address. For an IPv4 address, the prefix length must be <= 30, and for an IPv6 address, the prefix length must be <= 126. The server IP addresses must belong to the subnets that you have specified in this segment. The DHCP server IP address must not overlap with the IP addresses in the DHCP ranges and DHCP static binding. The DHCP server profile might contain server IP addresses, but these IP addresses are ignored when you configure a local DHCP server on the segment.
After a local DHCP server is created, you can edit the server IP addresses on the Set DHCP Config page. However, the new IP addresses must belong to the same subnet that is configured in the segment.
- If you are configuring a DHCP relay, this step is not applicable. The server IP addresses are fetched automatically from the DHCP relay profile and displayed below the profile name.
- If you are configuring a Gateway DHCP server, this text box is not editable. The server IP addresses are fetched automatically from the DHCP profile that is attached to the connected gateway.
Remember, the Gateway DHCP server IP addresses in the DHCP server profile can be different from the subnet that is configured in the segment. In this case, the Gateway DHCP server connects with the IPv4 subnet of the segment through an internal relay service, which is autocreated when the Gateway DHCP server is created. The internal relay service uses any one IP address from the subnet of the Gateway DHCP server IP address. The IP address used by the internal relay service acts as the default gateway on the Gateway DHCP server to communicate with the IPv4 subnet of the segment.
After a Gateway DHCP server is created, you can edit the server IP addresses in the DHCP profile of the gateway. However, you cannot change the DHCP profile that is attached to the gateway.
- (Optional) In the DHCP Ranges text box, enter one or more IP address ranges.
Both IP ranges and IP addresses are allowed. IPv4 addresses must be in a CIDR /32 format, and IPv6 addresses must be in a CIDR /128 format. You can also enter an IP address as a range by entering the same IP address in the start and the end of the range. For example,
172.16.10.10-172.16.10.10.Ensure that DHCP ranges meet the following requirements:
Note: The following types of IPv6 addresses are not permitted in DHCP for IPv6 ranges:
- IP addresses in the DHCP ranges must belong to the subnet that is configured on the segment. That is, DHCP ranges cannot contain IP addresses from multiple subnets.
- IP ranges must not overlap with the DHCP Server IP address and the DHCP static binding IP addresses.
- IP ranges in the DHCP IP pool must not overlap each other.
- Number of IP addresses in any DHCP range must not exceed 65536.
Caution: After a DHCP server is created, you can update existing ranges, append new IP ranges, or delete existing ranges. However, it is a good practice to avoid deleting, shrinking, or expanding the existing IP ranges. For example, do not try to combine multiple smaller IP ranges to create a single large IP range. You might accidentally miss including IP addresses, which are already leased to the DHCP clients from the larger IP range. Therefore, when you modify existing ranges after the DHCP service is running, it might cause the DHCP clients to lose network connection and result in a temporary traffic disruption.
- Link Local Unicast addresses (FE80::/64)
- Multicast addresses (FF00::/8)
- Unspecified address (0:0:0:0:0:0:0:0)
- Address with all F (F:F:F:F:F:F:F:F)
- (Optional) (Only for DHCPv6): In the Excluded Ranges text box, enter IPv6 addresses or a range of IPv6 addresses that you want to exclude for dynamic IP assignment to DHCPv6 clients.
In IPv6 networks, the DHCP ranges can be large. Sometimes, you might want to reserve certain IPv6 addresses, or multiple small ranges of IPv6 addresses from the large DHCP range for static binding. In such situations, you can specify excluded ranges.
- (Optional) Edit the lease time in seconds.Default value is 86400. Valid range of values is 60–4294967295. The lease time configured in the DHCP server configuration takes precedence over the lease time that you specified in the DHCP profile.
- (Optional) (Only for DHCPv6): Enter the preferred time in seconds.
Preferred time is the length of time that a valid IP address is preferred. When the preferred time expires, the IP address becomes deprecated. If no value is entered, preferred time is autocalculated as (lease time * 0.8).
Lease time must be > preferred time.
Valid range of values is 60–4294967295. Default is 69120.
- (Optional) Enter the IP address of the domain name server (DNS) to use for name resolution. A maximum of two DNS servers are permitted.When not specified, no DNS is assigned to the DHCP client. DNS server IP addresses must belong to the same subnet as the subnet's gateway IP address.
- (Optional) (Only for DHCPv6): Enter one or more domain names.DHCPv4 configuration automatically fetches the domain name that you specified in the segment configuration.
- (Optional) (Only for DHCPv6): Enter the IP address of the Simple Network Time Protocol (SNTP) server. A maximum of two SNTP servers are permitted.
In NSX-T Data Center 3.0, DHCPv6 server does not support NTP.
DHCPv4 server supports only NTP. To add an NTP server, click Options, and add the Generic Option (Code 42 - NTP Servers).
- (Optional) Click Options, and specify the Classless Static Routes (Option 121) and Generic Options.
In NSX-T Data Center 3.0, DHCP Options for IPv6 are not supported.
- Each classless static route option in DHCP for IPv4 can have multiple routes with the same destination. Each route includes a destination subnet, subnet mask, next hop router. For information about classless static routes in DHCPv4, see RFC 3442 specifications. You can add a maximum of 127 classless static routes on a DHCPv4 server.
- For adding Generic Options, select the code of the option and enter a value of the option. For binary values, the value must be in a base-64 encoded format.
- Click Apply to save the DHCP configuration, and then click Save to save the segment configuration.
What to do next
- After a segment has DHCP configured on it, some restrictions and caveats apply on changing the segment connectivity. For more information, see Scenarios: Impact of Changing Segment Connectivity on DHCP.
- When a DHCP server profile is attached to a segment that uses a DHCP local server, the DHCP service is created in the edge cluster that you specified in the DHCP profile. However, if the segment uses a Gateway DHCP server, the edge cluster in which the DHCP service is created depends on a combination of several factors. For a detailed information about how an edge cluster is selected for DHCP service, see Scenarios: Selection of Edge Cluster for DHCP Service. | https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/administration/GUID-486C1281-C6CF-47EC-B2A2-0ECFCC4A68CE.html | 2021-10-16T00:53:05 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.vmware.com |
LDAP Security Manager
This page describes the configuration and runtime application instructions for the LDAP Security Manager policy. The LDAP Security Manager policy establishes the necessary configuration details for an Open LDAP or Active Directory LDAP that you have set up for your enterprise.
Prerequisites
This document assumes that you are an API Versions Owner for the API version that you want to manage, or that you are a member of the Organization Administrators role.
Configuring an LDAP Security Manager
Configure the LDAP Security Manager to connect to your LDAP or Active Directory. All fields are required. Enter literal string values or secure property placeholders as shown.
A secure value is a value that, once entered, is not visible or retrievable after the policy creates.
Example Configuration for an Active Directory Security Manager
Note that the search filter string given above is particular to Active Directory applications.
Applying Your LDAP Security Manager and Basic Auth Policies
Follow the steps below to apply these policies to your endpoint at runtime.
Navigate to the API version details page for the API version to which you want to apply the policy.
Click the Policies tab to open it.
Apply the LDAP Security Manager policy and configure it to connect to your LDAP.
Apply the HTTP Basic Authentication policy to enforce your security manager policy.
Verify that the security policy is now in effect – the endpoint of your API should now require authentication.
Note that you can apply LDAP Security Manager policy and enforce it with HTTP Basic Authentication policy even if your target service version or endpoint already has a security manager configured. The security management enforced by the API Manager overrides any other security manager policies you have applied.
Unapplying an LDAP Security Manager and Basic Auth Policies
To unapply the HTTP Basic Authentication backed by an LDAP Security Manager from your service version or endpoints, unapply the policies one at a time.
Unapply the HTTP Basic Authentication policy.
Unapply the LDAP Security Manager policy.
Hit the endpoint to confirm that your API no longer requires authentication. | https://docs.mulesoft.com/api-manager/ldap-security-manager | 2017-10-17T01:58:09 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.mulesoft.com |
zone.field.method.name
Syntax
- s := zone.field.method.name
- zone.field.method.name = s
Get/set the method used for extrapolation of zone-based variables.
This has no effect on gridpoint-based variables.
The available names are constant, average, inverse-distance-weighting, and polynomial. The default value is constant.
The extrapolation methods are used to determine values at the gridpoints of the zone. Then a weighting function is used to determine how values vary inside the zone. | http://docs.itascacg.com/flac3d700/flac3d/zone/doc/manual/zone_manual/zone_fish/zone.field_intrinsics/fish_zone.field.method.name.html | 2021-01-16T01:53:02 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.itascacg.com |
Getting Started
This guide will cover what you need to know to start using Test Environment, from installation of dependencies to configuration.
To learn instead how to migrate an existing Truffle project, check out our Migrating from Truffle guide.
Installing Dependencies
Unlike Truffle, Test Environment is a testing library: it provides utilities that help you write tests, but it doesn’t run your tests for you.
This is a good thing: it means you are free to use any test runner of your choice. Here we will use Mocha (along with Chai):
$ npm install --save-dev @openzeppelin/test-environment mocha chai
Writing Tests
Each test file should have a
require statement importing Test Environment. This will bring up the local blockchain where your tests will run, and provide utilities to interact with it.
const { accounts, contract } = require('@openzeppelin/test-environment');
The exports you will be using the most are
accounts and
contract.
accounts
This is an array with the addresses of all unlocked accounts in the local blockchain. They are also prefunded, so they can all be used to send transactions.
A good practice is to use array destructuring to give meaningful names to each account, such as:
const [ admin, purchaser, user ] = accounts;
This will let you write clearer tests by making it obvious which actor is executing each action:
const crowdsale = await Crowdsale.new({ from: admin }); await crowdsale.buyTokens(amount, { from: purchaser });
contract
You will use this function to load your compiled contracts into JavaScript objects. By default these will be Truffle contracts, but they can be configured to be web3 contracts.
// Loads the built artifact from build/contracts/Crowdsale.json const Crowdsale = contracts.fromArtifact('Crowdsale');
Test Cases
The overall structure of your tests will be determined not by Test Environment, but by your test runner of choice.
In Mocha, test cases are declared with the
it function, and grouped in
describe blocks.
We’ll use the following contract to provide an example. It consists of a simple access control mechanism via OpenZeppelin Contracts'
Ownable:
// contracts/MyContract.sol pragma solidity ^0.5.0; import "@openzeppelin/contracts/ownership/Ownable.sol"; contract MyContract is Ownable { }
And here’s how testing this contract using Test Environment and Mocha looks like:
//); }); });
In Choosing a Test Runner you can compare how this example looks in each test runner.
Running your Tests
Test Environment is not executable: tests are run by invoking your runner of choice directly. We recommend that you do this by adding a
test script to your
package.json.
This is what that looks like when using Mocha:
// package.json "scripts": { "test": "mocha --exit --recursive" }
All set! Running
npm test to execute the test suite:
$ npm test MyContract ✓ the deployer is the owner
Compiling your Contracts
Using OpenZeppelin Test Helpers
Complex assertions, such as testing for reverts or events being emitted, can be performed by using the OpenZeppelin Test Helpers.
When used alongside Test Environment, there is no need for manual configuration:
require the helpers and use them as usual.
Configuration
Multiple aspects of Test Environment can be configured. The default values are very sensible and should work fine for most testing setups, but you are free to modify these.
To do this, create a file named
test-environment.config.js at the root level of your project: its contents will be automatically loaded.
// test-environment.config.js module.exports = { accounts: { amount: 10, // Number of unlocked accounts ether: 100, // Initial balance of unlocked accounts (in ether) }, contracts: { type: 'truffle', // Contract abstraction to use: 'truffle' for @truffle/contract or 'web3' for web3-eth-contract defaultGas: 6e6, // Maximum gas for contract calls (when unspecified) // Options available since v0.1.2 defaultGasPrice: 20e9, // Gas price for contract calls (when unspecified) artifactsDir: 'build/contracts', // Directory where contract artifacts are stored }, node: { // Options passed directly to Ganache client gasLimit: 8e6, // Maximum gas per block gasPrice: 20e9 // Sets the default gas price for transactions if not otherwise specified. }, };
Advanced Options
These settings are meant to support more complex use cases: most applications will not require using them.
setupProvider
async function setupProvider(baseProvider)
Returns a new web3 provider that will be used by all contracts and helpers.
Often used to wrap the base provider in one that performs additional tasks, such as logging or Gas Station Network integration:
// test-environment.config.js module.exports = { setupProvider: (baseProvider) => { const { GSNDevProvider } = require('@openzeppelin/gsn-provider'); const { accounts } = require('@openzeppelin/test-environment'); return new GSNDevProvider(baseProvider, { txfee: 70, useGSN: false, ownerAddress: accounts[8], relayerAddress: accounts[9], }); }, };
fork and
unlocked_accounts
These options allow Test Environment’s local blockchain to fork an existing one instead of starting from an empty state. By using them you can test how your code interacts with live third party protocols (like the MakerDAO system) without having to deploy them yourself!
In forked mode, you will also be able to send transactions from any of the
unlocked_accounts (even if you don’t know their private keys!).
// test-environment.config.js module.exports = { node: { // Options passed directly to Ganache client fork: '{token}@{blocknumber}, // An url to Ethereum node to use as a source for a fork unlocked_accounts: ['0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B'], // Array of addresses specifying which accounts should be unlocked. }, };
allowUnlimitedContractSize
Allows unlimited contract sizes. By enabling this flag, the check within the EVM for contract size limit of 24kB (see EIP-170) is bypassed. Useful when testing unoptimized contracts that wouldn’t be otherwise deployable.
// test-environment.config.js module.exports = { node: { // Options passed directly to Ganache client allowUnlimitedContractSize: true, // Allows unlimited contract sizes. }, }; | https://docs.openzeppelin.com/test-environment/0.1/getting-started | 2021-01-16T02:32:29 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.openzeppelin.com |
GroupDocs.Comparison for .NET 18.6 Release Notes
This page contains release notes for GroupDocs.Comparison for .NET 18.6
Major Features
Below the list of most notable changes in release of GroupDocs.Comparison for .NET 18.6:
- Implemented update changes for Comparison.Diagrams
- Added comparison settings for Comparison.Diagrams
- Implemented Comparison.Html settings
- Improve comparing of Diagrams documents (increased comparions accuracy, fixed issue in comparing most common cases for Diagram’s comparing)
- Implement comparison of new document components for Comparison.Html
Full List of Issues Covering all Changes in this Release
Public API and Backward Incompatible Changes
This section lists public API changes that were introduced in GroupDocs.Comparison for .NET 18 | https://docs.groupdocs.com/comparison/net/groupdocs-comparison-for-net-18-6-release-notes/ | 2021-01-16T01:53:55 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.groupdocs.com |
Announcing: Systems are coming back online.
November 6, 2020, 9:49 AM: Repairs on the cooling system are underway. No ETA, but the systems will likely be back some time today.
November 6, 2020, 4:27 AM: Cooling system failure, datacentre is shut down. | https://docs.scinet.utoronto.ca/index.php?title=Main_Page&oldid=2871&diff=prev | 2021-01-16T02:40:11 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.scinet.utoronto.ca |
Information about a message.
Contents
- messageId
The ID you want to assign to the message. Each
messageIdmust be unique within each batch sent.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Required: Yes
- payload
The payload of the message. This can be a JSON string or a base64-encoded string representing binary data, in which case you must decode it by means of a pipeline activity.
Type: Base64-encoded binary data object
Required: Yes
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_Message.html | 2021-01-16T03:29:52 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
Before.
The steps to create a PlotX market are simple:
1. If you see that a particular market is not live, then, you can click on the “Create Market” button to start the market creation process.
2. Once you click on the button, the market creation transaction will be automatically initiated.
3. You will need to approve the transaction via your connected wallet and pay the required Gas fees in ETH.
4. Once the transaction is complete the market will become live for predictions to be placed in it.
Once the market is live, you will be able to claim your rewarded $PLOT from the My Account page instantly.
Market Creator can also claim a minimum of 0.5% and a maximum of 5% of the reward pool of their created market based on the number of $PLOT staked by them in Play Mining.
Whenever a user creates any market by signing the market creation transaction, one of the incentives they receive for paying the necessary Gas fees, is in the form of a dynamically calculated amount of $PLOT tokens which will potentially have the same value as the amount of $ETH the user pays in Gas fees for the market creation transaction.
The exact amount of $PLOT the market creator ends up receiving is calculated in the following manner:
Incentive in $PLOT for market creation = (Gas Amount Used * Gas Price) / Price of PLOT in ETH
Here,
Gas Price = Whichever of the following is the minimum: i. Gas Price sent by market creator ii. Fast Gas Price as per Chainlink Oracle with a deviation of 25% iii. Maximum Gas Price (currently set at 100 Gwei)
PLOT price in ETH = Price taken from the PlotX smart contracts that are maintained with cumulative hourly price update
Gas Amount Used = Gas Amount used by the transaction to mine
The intention of the above-mentioned formula is to return 100% of the gas fee incurred by the Market Creator in the form of $PLOT, as long as the gas price is less than 100 Gwei.
Chainlink maintains an on-chain measure of fast gas required to mine transactions on Ethereum. Chainlink node operators update the gas price feed on-chain if there is a 25% deviation from the current gas price recorded on-chain. Hence, in such a case, the 2nd factor = 1.25 * on chain fast gas price provided by Chainlink.
It is important to note that in order to avoid drainage of funds and unnecessary gas wars, the gas price being compensated for is subject to some limits. It is possible that a user pays 1000 gas price while the fast gas price of 60 could have been sufficient. Hence, in order to avoid similar situations that may lead to gas wars and drainage of funds, the limits of Chainlink based data at 25% deviation is put in place.
In order to provide this incentive to the market creators, the PlotX protocol charges a fee from the market participants. This fee is taken as a percentage of the participation amount; for participations in $ETH, the fee is 0.1%, while for participations in $PLOT, the fee is 0.05%.
Any leftover fees, apart from the incentives of Market Creators, are accrued in the DAO on-chain, and the community can decide on how to use the DAO fund via on-chain governance.
For Market Creator to claim a %age of the reward pool of the created market, they need to stake $PLOT tokens for 30 days under Play Mining and then create the market.
Based on the amount of $PLOT staked for play mining by the market creator, the share of the reward pool is computed, beginning at 0.5% and capped at 5% of the reward pool.
The created market must accumulate a minimum liquidity equivalent to 1 ETH in PLOT and/or ETH for this incentive to be activated. This means that the combined value of PLOT and/or ETH staked in each of the three options should be greater than the value of 1 ETH.
The %age of the reward pool the market creator ends up getting is subject to the following conditions:
0.5% until the staked amount by the market creator is less than or equal to 25,000 $PLOT. Hence, even if the market creator hasn’t staked any $PLOT under Play Mining, they’ll still get a 0.5% share of the reward pool of their created market.
Increment of 0.5% with every additional 25,000 $PLOT that the market creator stakes under Play Mining.
A maximum of 5% when the staked amount becomes equal to or greater than 225,000 $PLOT under Play Mining.
The final %age share of the reward pool = Whichever of the following is minimum
5%
0.5% + absolute($PLOT staked in Play Mining / 25,000) * 0.5%
The ETH and/or PLOT (as per the reward pool split) that the market creator gets from the reward pool are claimable once the market has settled.
Incentivizing the market creator with a share of the reward pool aligns their interests towards grassroots activation for building liquidity in their markets.
Hence, the total incentive for market creation includes potentially 100% of the gas fee incurred in equivalent $PLOT (assuming the gas price is <= 100 Gwei), and an opportunity to claim up to 5% of the Reward Pool of the created market.
Claiming the rewards you get from creating markets is very simple. Just:
Navigate to the My Account page;
Scroll down to the “Market Creation Rewards” section;
See stats like the number of the markets that you have created, the reward pool share you got for each of those markets, how much Gas you paid and how much of it you got back, etc.
See the amount of PLOT and/or ETH that is pending to be claimed and click the Claim button to actually claim them.
Approve the initiated claim transaction and you’ll get the specified amount of ETH and/or PLOT in your connected wallet.
A few things to note here are: -
Once the market creation transaction is successful, you will be able to claim your rewarded PLOT instantly by following the above-mentioned steps.
If your created market accrues liquidity greater than or equal to the value of 1 ETH, then, you will be able to claim 0.5% to 5% of the reward pool (depending on how many PLOTs you have staked in Play Mining) once the market has settled by following the above-mentioned steps.
The amount of ETH and/or PLOT you get as the reward pool share depends on the split between the two currencies in the reward pool. If the reward pool comprises 10 ETH and 10,000 PLOT, and you had 250,000 PLOT staked under Play Mining before creating the market, then, you will be able to claim 0.5 ETH and 500 PLOT as your reward pool share once the market settles.
It is important to note that there may come a situation where multiple users are trying to create the same market simultaneously. In such a scenario, only the user whose transaction is successful first will receive the rewards that come with creating a market.
And once any one of the transactions goes through, the market will be created and all the other transactions will automatically fail.
Hence, it is advised that the user keeps this piece of information in mind while choosing the Gas settings for their market creation transaction.
Any user can create a market by signing the market creation transaction in the market creation page that becomes active right after a market ends.
When a daily market that is asking the price of BTC at 1:00 PM today ends, the market creation page for the daily market asking the price BTC at 1:00 PM tomorrow will become active. During the time this market creation page is active, any user can sign the market creation transaction to create the 1:00 PM daily market of BTC.
Let’s say that Alex signs the market creation transaction during this time and spend $25 worth of $ETH in gas cost to create the 1:00 PM daily market of BTC. Now, provided that the gas price he paid for the transaction isn’t greater than 100 Gwei, Alex will receive back $25 worth of $PLOT as soon as his transaction goes through and the market is created.
Now, assuming that Alex has 250,000 $PLOT staked in Play Mining, he will own 5% of the reward pool of the market he just created. Let’s say that post-settlement, the reward pool of this market is worth $10,000. Then, this would mean that Alex will now get $500 worth $ETH and/or $PLOT from the reward pool as an additional incentive for creating the market.
Hence, his total incentive for market creation becomes $525 ($25 equivalent in $PLOT and $500 equivalent in $PLOT and/or $ETH, based on the reward pool split).
PlotX offers three distinct time frames, namely 4 Hour,. | https://docs.plotx.io/getting-started/how-to-create-markets-on-plotx | 2021-01-16T02:58:01 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.plotx.io |
Let's go through the files and folders of the app.
Our project will have
ios and an
android folder at the root of the project. These are entirely separate native projects that should be considered part of your Ionic app, this means you need to check them into source control and edit them in their own IDEs. I suggest you to read more about Capacitor in this post.
In the root of the project we have the following important files and folders:
/node_modules
The npm packages installed in the project with the
npm install command.
/resources
when building the app for different platforms (like iOS or android), this folder will be automatically generated with the app resources like the logo and the splash screen image. You should put your own resources here.
/src
This is the most important folder and where the majority of the app will be developed. Here we have all the files that make our ionic app and is the location where we will spend most of our time.
/www
This folder is generated automatically and you shouldn't change anything from here. It's where all the compiled files will go.
angular.json
Main configuration file for the Angular CLI..
We have already talked about the configuration for the workflow enough, now let's dive in the details of the
/src folder.
Inside of the
/src directory we find our raw, uncompiled code. This is where most of the work for your Ionic app will take place.
Note: testing is not implemented for this template. If you want to add testing check this guide.
When we start the scripts that handle the bundling/compilation workflow, our code inside of
/src gets bundled and transpiled into the correct Javascript version that the browser understands (currently, ES5). That means we can work at a higher level using TypeScript, but compile down to the older form of Javascript the browser needs.
Under this folder you will find the following important folders and files:
/app
Has all the components, modules, pages, services and styles you will use to build your app.
/assets
In this folder you will find sample images, sample-data json’s, and any other asset you may require in your app.
/environments
Under this folder are configuration files used by the Angular CLI to manage the different environment variables.
For example we could have a local database for our development environment and a product database for production environment.
When we run
ng serve it will use by default the dev environment.
/theme
It includes all the theming, variables and sass mixins to be used in our ionic app.
index.html
This is the main entry point for the app, though its purpose is to set up scripts, CSS includes, or start running our app. We won’t spend much of our time in this file.
tsconfig.app.json
This file extends
tsconfig.json main file and adds some specific configuration for the app. It’s then used in
angular.json
tsconfig.server.json
This file extends
tsconfig.json main file and adds some specific configuration for the server. It’s then used in
angular.json
manifest.json
This file is the app manifest to be used for the progressive web app. We will explain more about this in PWA section.
The App folder is the largest folder because it contains all the code of our ionic app. It has all the components, modules, pages, services and styles you will use to build your app.
This is the core of the project. Let’s have a look at the structure of this folder so you get an idea where to find things and where to add your own modules to adapt this project to your particular needs.
We designed this ionic project with a modular approach.
We strive to showcase an advanced app module architecture so you get a better idea on how to structure and scale your project. Again, modules are great to achieve scalability.
Inside the
/app folder there are also some important files:
app.component.html
This serves as the skeleton of the app. Typically has a
<ion-router-outlet> to render the routes and their content. It can also be wrapped with content that you want to be in every page (for example a footer). In this app we added the side menu in this file.
app.component.ts
It’s the Angular component that provides functionality to the html file I just mentioned about. In this template we have all the code for the side menu.
app.module.ts
This is the main module of the ionic project.
app-routing.module.ts
Here we define the main routes. Child routes of other lazy modules are defined inside those modules. These routes are registered to the Angular
RouterModule in the
AppModule.
In this folder you will find images, sample-data json’s, and any other asset you may require in your app.
Here you will find all the variables, mixins, shared styles, etc, that will make your app customizable and extendable.
Here is where you will change the main colors of the app to match your styles.
Maybe you don’t know Sass? Briefly, it is a superset of css that will ease and speed your development cycles incredibly. | https://ionic-5-full-starter-app-docs.ionicthemes.com/code-structure | 2021-01-16T02:59:41 | CC-MAIN-2021-04 | 1610703499999.6 | [] | ionic-5-full-starter-app-docs.ionicthemes.com |
Cameo DataHub 19.0 LTR SP4 Documentation
Cameo Data Modeler Plugin
Methodology Wizard Plugin – 2021 No Magic, Incorporated, a Dassault Systèmes company – All Rights Reserved.
Company
Resources
Connect | https://docs.nomagic.com/display/CDH190SP4/Cameo+DataHub+Documentation?reload=true | 2021-01-16T02:24:27 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.nomagic.com |
Alternative Node.js install
How to use NVM to run different versions of Node.js
Node Version Manager or NVM is a tool for managing multiple versions of Node.js in one installation.
You can use NVM with any of our container types that have node installed to change or update the version. This may be useful, for example, where a container has a Long Term Release (LTS) version available, but you would like to use the latest.
Installing NVM is done in the build hook of your
.platform.app.yaml, which some additional calls to ensure that environment variables are set correctly.
variables: env: # Update these for your desired NVM and Node versions. NVM_VERSION: v0.36.0 NODE_VERSION: v14.13.1 hooks: build: | unset NPM_CONFIG_PREFIX export NVM_DIR="$PLATFORM_APP_DIR/.nvm" # install.sh will automatically install NodeJS based on the presence of $NODE_VERSION curl -f -o- | bash [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
And in a
.environment file in the root of your project:
# This is necessary for nvm to work. unset NPM_CONFIG_PREFIX # Disable npm update notifier; being a read only system it will probably annoy you. export NO_UPDATE_NOTIFIER=1 # This loads nvm for general usage. export NVM_DIR="$PLATFORM_APP_DIR/.nvm" [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" | https://docs.platform.sh/languages/nodejs/nvm.html | 2021-01-16T03:22:20 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.platform.sh |
To lay the foundations of the project, the first community members advertised "Gigs & Rewards" on a dedicated Telegram Channel. The idea was to gather additional skills (technical, marketing, design, writing...) to community-build the ecosytem from scratch. Various community volunteers have been rewarded with YFD Tokens for their unvaluable contribution (details on Telegram Channel and Etherscan).
Remember: YFD is a token issued without any value, initially a zero value token. Its value increase thanks to value created by the community in YfDFI Finance ecosystem. So, it's a good start to reward early volunteers with a governance token that is gaining in value day after day thanks to community contribution (kind of community-value backed token).
To make it possible, 2000 YFD tokens have been reserved for "Gigs & Rewards". A public wallet available for consultation is holding and distributing those tokens to early volunteers responding to "Gigs & Rewards" calls and delivering the expected results.
In the near future, a fully decentralized rewards distribution system will be put in place. For this, we will rely on the DFI.Governance voting system and the DFI.Ventures community fund. The combination of these two solutions will firstly allow us to community-vote improvements to fund, and secondly to be able to unlock funds from the community fund. | https://docs.yfdfi.finance/general/gigs-and-rewards | 2021-01-16T03:08:37 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.yfdfi.finance |
ansible.builtin.file – read file contents¶
Note
This module is part of
ansible-base and included in all Ansible
installations. In most cases, you can use the short module name
file
if read in variable context, the file can be interpreted as YAML if the content is valid to the parser.
this lookup does not understand ‘globing’, use the fileglob lookup instead.
Examples¶
- debug: msg="the value of foo.txt is {{lookup('file', '/etc/foo.txt') }}" - name: display multiple file contents debug: var=item with_file: - "/path/to/foo.txt" - "bar.txt" # will be looked in files/ dir relative to play or in role - "/path/to/biz.txt"
Return Values¶
Common return values are documented here, the following are the fields unique to this lookup: | https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_lookup.html | 2021-01-16T02:45:43 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.ansible.com |
constructor.
Convenience constructor. Assumes the value of
readable is
false. The Texture returned by
texture will not have its texture data accessible from script.
using System.Collections; using UnityEngine; using UnityEngine.Networking;
public class Example : MonoBehaviour { IEnumerator Start() { using (var uwr = new UnityWebRequest("", UnityWebRequest.kHttpVerbGET)) { uwr.downloadHandler = new DownloadHandlerTexture(); yield return uwr.SendWebRequest(); GetComponent<Renderer>().material.mainTexture = DownloadHandlerTexture.GetContent(uwr); } } }
Constructor, allows TextureImporter.isReadable property to be set.
The value in
readable will be used to set the TextureImporter.isReadable property when importing the downloaded texture data. | https://docs.unity3d.com/ScriptReference/Networking.DownloadHandlerTexture-ctor.html | 2021-01-16T03:52:22 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.unity3d.com |
A class that can be used to implement a "busy" indicator. More...
#include <busy_indicator.h>
A class that can be used to implement a "busy" indicator.
The exact form of the busy indicator is unspecified. It could be a "spinner" cursor in a GUI context, for example.
This base class provides a "null" implementation, and can be overriden for specific behaviours.
THe busy-ness semantics are defined by this object's lifetime.
Definition at line 40 of file busy_indicator.h.
A factory function that returns a new busy indicator.
Because BUSY_INDICATORs are RAII objects (i.e. the busy-ness is defined by the object's lieftime), it's convenient to pass a factory function for a client to be able to make a busy indicator when needed.
Definition at line 50 of file busy_indicator.h.
This class is intended to be handled by pointer-to-base class. | https://docs.kicad-pcb.org/doxygen/classBUSY__INDICATOR.html | 2020-03-28T15:14:17 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.kicad-pcb.org |
The main provisions of the Consumer Rights Act 2015 are now in force and a large number of our document templates have been reviewed and updated to help you in getting along with the new Act. As well as bringing many different pieces of legislation together, the Act has also brought with it some new requirements for traders to comply with and remedies to help protect consumers when things don’t work out. In this post we’ll be taking a look at the rules covering the provision of services.
Legal Requirements for Services
When it comes to the requirements set out in the law, it should come as no surprise (or if it does – as a pleasant one) that your legal obligations do little more than echo good business sense.
What is important here from the legal point of view is the actual performance of the service, not the end result (but let’s face it – you want to keep your customers happy so the end result should be quite important to you!). As for what “reasonable skill and care” is, that will take account of various factors, including prevailing standards in your particular industry or sector, and the price paid for your services.
What about the reasonable price and time requirements? For the most part, you won’t find these mentioned in our templates because these rules apply only if the relevant information has not already been given to the customer, or is not already included in the contract (or the customer has not paid a price). The bottom line: you should always ensure that your customer knows what’s going on. By making sure that any and all information given to the customer at all stages (both before and after any contract has been made) is detailed and clear – especially on these points – it will be what you have agreed between you and not what the law implies when it comes to price and time for performance that matters. Again, your goal should be a happy customer and keeping people in the dark doesn’t usually lead to that outcome – so this should be an easy requirement to meet and you have most likely been meeting it since day one!
New Rules on Information
As under the recent Consumer Contracts Regulations, any information you provide to your customers about yourself or your services can be taken as a contractual term. In addition, the Consumer Rights Act bestows similar status on information provided voluntarily where that information is taken into account by the customer when deciding whether or not to enter into a contract, or where it is taken into account when the customer makes a decision about the service after entering into the contract.
It is important to note that this does not tie you to everything you say – any qualifying statements made at the same time will be taken into account when determining whether or not something you have said or written to the customer should be treated as contractually binding. Nevertheless, when considering statements made in advertising and other forms of marketing, this is an important point to be aware of.
Returning briefly to the Consumer Contracts Regulations and, more specifically, the pre-contract information requirements that they set out, any such information will also be treated as a contract term.
So what if things change? The key point to the rules governing the binding nature of information given by traders to consumers appears to be the prevention of unilateral changes or – to put it in blunter terms – getting the customer’s business by promising one thing, but actually giving them another. The information can be changed, provided both the trader and the customer expressly agree to it. Once more, then, although these are legal requirements and breaching them could have serious ramifications, if you are running an honest business and not trying to mislead your customers, compliance should be a virtual given.
What Could Possibly Go Wrong?
If something goes wrong and it turns out that you have not complied with your obligations in some way, the Consumer Rights Act has introduced new remedies for consumers purchasing services.
If the service is not performed with reasonable skill and care, the customer will have the right to repeat performance. If that isn’t possible, or isn’t done within a reasonable time or without inconvenience to the customer, they will have the right to a reduction in price (up to the full price for the service).
If the service isn’t performed within a reasonable time (though remember what we said above about specifying such information in the contract), the customer may have the right to a price reduction.
What about the all-important information? If the service isn’t performed in accordance with information you have provided about it, the same remedies of repeat performance and price reduction will again apply. If, on the other hand, the problem relates to information you’ve provided about yourself (as opposed to the service), the only remedy on offer from the Consumer Rights Act is a price reduction.
Repeat Performance
The goal of this remedy is to put things right, leaving the customer in the position he or she would have been in had the service been performed correctly in the first place. The “repeat” part, then, doesn’t necessarily refer to the whole kit and caboodle – you must only perform the service again to the extent required to ensure compliance with the contract.
This must be done within a reasonable time and without causing your customer significant inconvenience. What’s more, you must not charge the customer for repeat performance – the cost is yours to bear and yours alone.
Remember, if the repeat performance can’t be carried out within a reasonable time, without significant inconvenience to the customer, or if it is simply not possible, the customer should be given a price reduction.
Price Reduction
In cases where your customer may be entitled to a price reduction, this can be any amount up to and including the full price. The Consumer Rights Act refers to the price reduction being of “an appropriate amount” – this essentially refers to the difference in value between the service the customer should have received and the value of that which they have actually received. Remember also that the customer may be entitled to a price reduction if you have provided incorrect information about something else, for example, your business.
Where the customer has already paid something, they may be entitled to a refund as a result of the price reduction. Under the Consumer Rights Act, refunds must be given “without undue delay” and in any case, within 14 calendar days starting on the day that you agree your customer is entitled to the refund. Unless the customer expressly agrees otherwise, you must use the same payment method originally used by the customer when they paid in the first place – so no refunding them with useless vouchers when they paid by debit card! Finally, you may not impose any fee on the customer for issuing the refund. But you weren’t going to do that anyway, were you?
Onward!
It is, without a doubt, important to be aware of your obligations under the Consumer Rights Act, and the remedies open to consumers should you fail to comply with those obligations in some way. With that said, nothing here should come as a particular surprise and, as we have noted more than once, if your goal in business is to keep your customers happy and informed, complying with these rules should be a cinch.
Join us in our next blog post for details on the new digital content provisions of the Consumer Rights Act and in the meantime, feel free to drop us a line with any comments you might have on your life as a service provider under this shiny new legislation! | https://blog.simply-docs.co.uk/category/uncategorized/ | 2020-03-28T14:58:08 | CC-MAIN-2020-16 | 1585370491998.11 | [] | blog.simply-docs.co.uk |
Transaction Model
Transactions record the payment history of an order.
A transaction can be of a certain type and status, and contain information relevant to the payment and communication to the third party payment gateway.
idid
The Commerce ID of the transaction.
hashhash
The unique hash identifier of the transaction as created by Craft Commerce.
typetype
The type of transaction. Possible values are:
purchase The transaction represents a purchase or immediate payment request. If this transaction type succeeds, the charge on the gateway took the funds from the customers credit card immediately and payment has been made.
authorize The transaction represents an authorization of a payment with the gateway. If successful, the payment was successfully authorized, but an additional capture action needs to take place for the funds to be taken from the credit card.
capture This transaction represents a capture of a previous
authorize transaction. If this transaction type succeeds, the charge on the gateway took the funds from the customers credit card and payment has been made. This transaction is always the child of an authorize transaction.
refund This transaction represents a refund of a payment. It is always the child transaction of either a
purchase or
capture transaction. You can not refund an authorization.
amountamount
The amount of the transaction. This amount is in the primary currency.
paymentAmountpaymentAmount
The payment amount of the transaction, which is the amount sent and used when communicating with the payment gateway. This amount is in the currency of the order’s payment currency.
paymentRatepaymentRate
This stores the currency conversion rate of the order’s payment currency at the time payment was attempted.
statusstatus
The status of the transaction. Possible values are:
pending The transaction is pending getting a
redirect,
success or
failed status.
redirect The initial transaction was registered successfully with the offsite gateway, and we have been told to redirect to the offsite gateway. This will be the status while the customer is on the gateways offsite page.
success The transaction is successful.
failed The transaction failed. See the transaction
code and
message to find out more information.
referencereference
The reference of the transaction as defined by the gateway.
messagemessage
The plain text message response from the gateway. Usually a sentence. This message is used to show to the customer if the transaction failed.
responseresponse
The full response data from the gateway, serialized as JSON. Useful for debugging.
codecode
The response code from the gateway. This will usually align in its meaning with the
parentIdparentId
Some transactions are children of another transaction. For example, capture transactions are children of authorize transactions, and refund transactions are children of capture or purchase transactions.
orderorder
The Order model this transaction belongs to.
orderIdorderId
The order ID of the Order model this transaction belongs to.
paymentMethodIdpaymentMethodId
The ID of the payment method used for communicating with the third party gateway. | https://docs.craftcms.com/commerce/v1/transaction-model.html | 2020-03-28T14:11:26 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.craftcms.com |
Locating your database
The PhraseExpander database contains all your data and it's normally located in the Documents folder.
The database must not be shared with other people to avoid data corruption.
It has the pedb extension (it was ipdb in PhraseExpander 4) and its default name is PhraseExpanderData.pedb
The name of the opened database is shown in the title bar and in the status bar.
To locate the file in Windows Explorer
- 1
Click on the name of the database in the status bar
- 2
Windows opens the Explorer and shows the file
| https://docs.phraseexpander.com/article/106-locating-your-database | 2020-03-28T15:38:48 | CC-MAIN-2020-16 | 1585370491998.11 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5738e180c6979143dba7890f/images/5e70f9f62c7d3a7e9ae94fc4/file-PGhNKA7E1x.png',
None], dtype=object) ] | docs.phraseexpander.com |
Create a new text fill-in variable
Use fill-ins to create request additional information through a form before the template is sent to the target application.
This tutorial explains how to display a textbox to request additional information where the user can type freely
To create a new fill-in
- 1
In the Quick edit or the Full editor, right-click and choose Variable → New fill-in
- 2
Assign a variable name (should be unique in the template), how it should be displayed (text, multi-selection list, calendar) and the appropriate options.
In this case, we select Text as Display as.
- 3
Click on the Preview button to display a preview of the variable
- 4
Click OK to confirm. The variable placeholder is inserted into the template. The variable can be reused multiple times in the template. | https://docs.phraseexpander.com/article/84-create-a-new-text-fill-in-variable | 2020-03-28T15:21:41 | CC-MAIN-2020-16 | 1585370491998.11 | [] | docs.phraseexpander.com |
Backups: Upload, deletion and versioning¶
Assuming that you have already a collection and an access token, then we can start uploading files that will be versioned and stored under selected collection.
Uploading a new version to the collection¶
You need to submit file content in the HTTP request body. The rest of the parameters such as token you need to pass as GET parameters.
POST /repository/collection/{{collection_id}}/backup?_token={{token_that_allows_to_upload_to_allowed_collections}} .... FILE CONTENT THERE ....
Pretty simple, huh? As the result you will get the version number and the filename, something like this:
{ "status": "OK", "error_code": null, "exit_code": 200, "field": null, "errors": null, "version": { "id": "69283AC3-559C-43FE-BFCC-ECB932BD57ED", "version": 1, "creation_date": { "date": "2019-01-03 11:40:14.669724", "timezone_type": 3, "timezone": "UTC" }, "file": { "id": 175, "filename": "ef61338f0dsolidarity-with-postal-workers-article-v1" } }, "collection": { "id": "430F66C3-E4D9-46AA-9E58-D97B2788BEF7", "max_backups_count": 2, "max_one_backup_version_size": 1000000, "max_collection_size": 5000000, "created_at": { "date": "2019-01-03 11:40:11.000000", "timezone_type": 3, "timezone": "UTC" }, "strategy": "delete_oldest_when_adding_new", "description": "Title: Solidarity with Postal Workers, Against State Repression!", "filename": "solidarity-with-postal-workers-article" } }
Required permissions:
- collections.upload_to_allowed_collections
Deleting a version¶
A simple DELETE type request will delete a version from collection and from storage.
DELETE /repository/collection/{{collection_id}}/backup/BACKUP-ID?_token={{token}}
Example response:
{ "status": "OK, object deleted", "error_code": 200, "exit_code": 200 }
Required permissions:
- collections.delete_versions_for_allowed_collections
Getting the list of uploaded versions¶
To list all existing backups under a collection you need just a collection id, and the permissions.
GET /repository/collection/{{collection_id}}/backup?_token={{token}}
Example response:
{ "status": "OK", "error_code": null, "exit_code": 200, "versions": { "3": { "details": { "id": "A9DAB651-3A6F-440D-8C6D-477F1F796F13", "version": 3, "creation_date": { "date": "2019-01-03 11:40:24.000000", "timezone_type": 3, "timezone": "UTC" }, "file": { "id": 178, "filename": "343b39f56csolidarity-with-postal-workers-article-v3" } }, "url": "" }, "4": { "details": { "id": "95F12DAD-3F03-49B0-BAEA-C5AC3E8E2A30", "version": 4, "creation_date": { "date": "2019-01-03 11:47:34.000000", "timezone_type": 3, "timezone": "UTC" }, "file": { "id": 179, "filename": "41ea3dcca9solidarity-with-postal-workers-article-v4" } }, "url": "" } } }
Required permissions:
- collections.list_versions_for_allowed_collections
Downloading uploaded versions¶
Given we upload eg. 53 versions of a SQL dump, one each month and we want to download latest version, then we need to call the fetch endpoint with the “latest” keyword as the identifier.
GET /repository/collection/{{collection_id}}/backup/latest?password={{collection_password_to_access_file}}&_token={{token}}
If there is a need to download an older version of the file, a version number should be used, eg. v49
GET /repository/collection/{{collection_id}}/backup/v49?password={{collection_password_to_access_file}}&_token={{token}}
There is also a possibility to download a last copy from the bottom, the oldest version available using keyword first.
GET /repository/collection/{{collection_id}}/backup/first?password={{collection_password_to_access_file}}&_token={{token}}
In case we have an ID of the version, then it could be inserted directly replacing the alias keyword.
GET /repository/collection/{{collection_id}}/backup/69283AC3-559C-43FE-BFCC-ECB932BD57ED?password=thats-a-secret&_token={{token}}
Required permissions:
- collections.list_versions_for_allowed_collections
- (knowing the password for the collection file)
Notes:
- The password for the file is inherited from collection, but it may be different in case when the collection would have changed the password, old files would not be updated! | https://file-repository.docs.riotkit.org/en/stable/domain/backup/versioning.html | 2020-03-28T15:17:36 | CC-MAIN-2020-16 | 1585370491998.11 | [] | file-repository.docs.riotkit.org |
Introduction
In this tutorial, you will learn how to use Couchbase Lite in a React Native project.
The sample project is an application that allows users to search and bookmark hotels from a Couchbase Lite database. The application contains two screens:
Bookmarks Screen: to list the bookmarked hotels. You can unbookmark a previously bookmarked hotel from this screen.
Search Screen: to search for hotels by providing a location and/or full-text search query. You can bookmark (or unbookmark) a hotel from this screen.
Architecture
The user interface is written in JavaScript, while the business logic and data model are written in native Java. The data model uses Couchbase Lite as the embedded data persistence layer. A React Native module acts as the bridging layer between the JavaScript layer and the native Java layer.
This architecture allows you to write the user interface code once for both iOS and Android apps while leveraging Couchbase Lite’s native Android framework for data management.
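As a rough sketch of how the bridging layer can be wired up, the Java class below shows a minimal React Native native module that wraps a Couchbase Lite Database. The class name HotelsModule, the bookmarkHotel method, and the document fields used here are illustrative assumptions, not necessarily how the sample project names things.

package com.hotelfinder;

import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.MutableDocument;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

// Hypothetical native module that exposes Couchbase Lite operations to JavaScript.
public class HotelsModule extends ReactContextBaseJavaModule {

    private final Database database;

    public HotelsModule(ReactApplicationContext reactContext, Database database) {
        super(reactContext);
        this.database = database;
    }

    @Override
    public String getName() {
        // Name under which the module is exposed to the JavaScript layer.
        return "HotelsModule";
    }

    // Callable from JavaScript; resolves or rejects the returned Promise.
    @ReactMethod
    public void bookmarkHotel(String hotelId, Promise promise) {
        try {
            // Record the bookmark as a document in the embedded database.
            MutableDocument bookmark = new MutableDocument(hotelId + "::bookmark");
            bookmark.setString("type", "bookmark");
            bookmark.setString("hotelId", hotelId);
            database.save(bookmark);
            promise.resolve(hotelId);
        } catch (CouchbaseLiteException e) {
            promise.reject("bookmark_error", e);
        }
    }
}

On the JavaScript side, a module registered this way would be reachable through NativeModules.HotelsModule, which is how the screens can call into the native data layer without embedding any database logic in JavaScript.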
Data Model
The data model for the app is very straightforward. There are two types of documents:
The "bookmarkedhotels" document which includes the list of Ids corresponding to the hotels that have been bookmarked
The "hotel" document which contains the details of the bookmarked hotel. The bookmarkedhotels document references the hotel document.
Note that although we have modeled our data using a "by reference" / normalized model, since Couchbase Lite is a JSON document store, you have the flexibility to embed the details of the hotels directly within the bookmarkedhotels document.
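To make the shape of these two document types concrete, the sketch below creates them with the Couchbase Lite Java API. The document IDs and field names (name, location, hotels) are illustrative assumptions; the sample project's actual schema may differ.

import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.MutableArray;
import com.couchbase.lite.MutableDocument;

public class DataModelExample {

    // Creates one "hotel" document and one "bookmarkedhotels" document
    // that references it by ID (the normalized / "by reference" model).
    public static void createSampleDocuments(Database database) throws CouchbaseLiteException {
        // A "hotel" document holding the details of a single hotel.
        MutableDocument hotel = new MutableDocument("hotel_10025");
        hotel.setString("type", "hotel");
        hotel.setString("name", "Sample Hotel");        // assumed field name
        hotel.setString("location", "San Francisco");   // assumed field name
        database.save(hotel);

        // The "bookmarkedhotels" document: a list of bookmarked hotel IDs.
        MutableDocument bookmarks = new MutableDocument("user::bookmarkedhotels");
        bookmarks.setString("type", "bookmarkedhotels");
        MutableArray hotelIds = new MutableArray();
        hotelIds.addString(hotel.getId());              // reference by ID, not embedded
        bookmarks.setArray("hotels", hotelIds);
        database.save(bookmarks);
    }
}

Unbookmarking a hotel then amounts to removing its ID from the hotels array, while the embedded alternative mentioned above would store the full hotel details inside the bookmarkedhotels document instead of just their IDs.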