_id | partition | text | language | title |
---|---|---|---|---|
d19801 | test | A simple way to do it is to put your GDI+ 1.1 code in #ifdefs and compile it into two different DLLs -- one with the code and one without. Then at runtime load the DLL that will work. Maybe you could even attempt to load the 1.1 DLL and, if that fails, fall back to the 1.0 DLL; a sketch of that fallback follows.
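A minimal sketch of the runtime fallback (the DLL names here are hypothetical):
#include <windows.h>

HMODULE LoadGdiPlusRenderer()
{
    // Try the build that contains the GDI+ 1.1 code first...
    HMODULE h = LoadLibrary(TEXT("Renderer_Gdip11.dll"));
    if (h == NULL)
        h = LoadLibrary(TEXT("Renderer_Gdip10.dll")); // ...then fall back to the 1.0 build
    return h;
} | unknown | |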
d19802 | test | I can see your website is still unsecured; for what it's worth, get yourself a Let's Encrypt SSL certificate.
Back to your question: go to your database, open the wp_options table, change the siteurl item to https://tourpoules.nl, and also change the home item to https://tourpoules.nl (or run the equivalent SQL below).
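A query along these lines (assuming the default wp_ table prefix) does the same thing:
UPDATE wp_options
SET option_value = 'https://tourpoules.nl'
WHERE option_name IN ('siteurl', 'home');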
A: If you have used the Search and Replace DB master script or a plugin, it will not update URLs inside meta fields; also check whether your functions file enqueues assets with https://.
So it will be better if you download the SQL file and replace as below:
From:
http://new.tourpoules.nl
To
https://new.tourpoules.nl
and re-upload it again. | unknown | |
d19803 | test | Using the requests and lxml libraries (lxml for XPath), this becomes a fairly straightforward task:
import requests
from lxml import etree

s = requests.session()
r = s.get("https://fbref.com/fr/comps/13/calendrier/Scores-et-tableaux-Ligue-1")
tree = etree.HTML(r.content)
matchreporturls = tree.xpath('//td[@data-stat="match_report"]/a[text()="Rapport de match "]/@href')

for matchreport in matchreporturls:
    r = s.get("https://fbref.com" + matchreport)
    # do something with the response data
    print('scraped {0}'.format(r.url)) | unknown | |
d19804 | test | You can read from however many nodes you want in a Cloud Function. However, only one can trigger the function to run.
To read from your database use the following code:
admin.database().ref('/your/path/here').once('value').then(function(snapshot) {
var value = snapshot.val();
});
You will probably want to read from the same place that the Cloud Function was triggered. Use context.params.PARAMETER to get this information. For the example you posted your code would turn out looking something like this:
admin.database().ref('/GroupChat/'+context.params.Modules+'/SDevtChat/'+context.params.SDevtChatId+'/from').once('value').then(function(snapshot) {
var value = snapshot.val();
});
A: Just trigger your function one level higher in the JSON:
exports.sendNotification7 =
functions.database.ref('/GroupChat/{Modules}/SDevtChat/{SDevtChatId}')
.onWrite((change, context) => {
// Grab the current value of what was written to the Realtime Database.
var eventSnapshot = change.after.val();
console.log(eventSnapshot);
var str = "New message from System Development Group Chat: " + eventSnapshot.message;
var from = eventSnapshot.from;
... | unknown | |
d19805 | test | To get your grubby hands on exactly what Access is doing query-wise behind the scenes, there's an undocumented feature called JETSHOWPLAN -- when switched on in the registry, it creates a showplan.out text file. The details are in
this TechRepublic article (alternate link), summarized here:
The ShowPlan option was added to Jet 3.0, and produces a text file
that contains the query's plan. (ShowPlan doesn't support subqueries.)
You must enable it by adding a Debug key to the registry like so:
\\HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\JET\4.0\Engines\Debug
Under the new Debug key, add a string data type named JETSHOWPLAN
(you must use all uppercase letters). Then, add the key value ON to
enable the feature. If Access has been running in the background, you
must close it and relaunch it for the function to work.
When ShowPlan is enabled, Jet creates a text file named SHOWPLAN.OUT
(which might end up in your My Documents folder or the current
default folder, depending on the version of Jet you're using) every
time Jet compiles a query. You can then view this text file for clues
to how Jet is running your queries.
We recommend that you disable this feature by changing the key's value
to OFF unless you're specifically using it. Jet appends the plan to
an existing file and eventually, the process actually slows things
down. Turn on the feature only when you need to review a specific
query plan. Open the database, run the query, and then disable the
feature.
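For reference, the registry tweak described above, expressed as an importable .reg file:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\JET\4.0\Engines\Debug]
"JETSHOWPLAN"="ON"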
For tracking down nightmare problems it's unbeatable - it's the sort of thing you get on your big expensive industrial databases - this feature is cool - it's lovely and fluffy - it's my friend… ;-)
A: Could you not throw a packet sniffer (like Wireshark) on the network and watch the traffic between one user and the host machine?
A: If it uses an ODBC connection you can enable logging for that.
*Start ODBC Data Source Administrator.
*Select the Tracing tab
*Select the Start Tracing Now button.
*Select Apply or OK.
*Run the app for awhile.
*Return to ODBC Administrator.
*Select the Tracing tab.
*Select the Stop Tracing Now button.
*The trace can be viewed in the location that you initially specified in the Log file Path box.
A: First question: Do you have a copy of MS Access 2000 or better?
If so:
When you say the MDB is "password protected", do you mean that when you try to open it using MS Access you get a prompt for a password only, or does it prompt you for a user name and password? (Or give you an error message that says, "You do not have the necessary permissions to use the foo.mdb object."?)
If it's the latter, (user-level security), look for a corresponding .MDW file that goes along with the MDB. If you find it, this is the "workgroup information file" that is used as a "key" for opening the MDB. Try making a desktop shortcut with a target like:
"Path to MSACCESS.EXE" "Path To foo.mdb" /wrkgrp "Path to foo.mdw"
MS Access should then prompt you for your user name and password which is (hopefully) the same as what the VB6 app asks you for. This would at least allow you to open the MDB file and look at the table structure to see if there are any obvious design flaws.
Beyond that, as far as I know, Eduardo is correct that you pretty much need to be able to run a debugger on the developer's source code to find out exactly what the real-time queries are doing...
A: It is not possible without the help of the developers. Sorry. | unknown | |
d19806 | test | You could provide an optional template argument which is a comparator (I think the standard library does that frequently). Less ambitiously, you could compare against type{}, which should work for anything with a default ctor: if(element != type{}). (Your problem is not that string doesn't have a comparison operator, but that the operators aren't defined for comparisons with ints.) A sketch of the second idea follows.
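The element type only needs a default constructor and operator!= for this to compile:
template <typename T>
bool is_non_default(const T& element)
{
    return element != T{}; // T{} is the default-constructed "empty" value
} | unknown | |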
d19807 | test | You need to correctly define the dependency on the Firebase Admin SDK for Node.js, and initialize it, as shown below.
You also need to change the way you declare the function: exports.wooCommerceWebhook = async (req, res) => {...} instead of exports.wooCommerceWebhook = functions.https.onRequest(async (req, res) => {...});. The one you used is for Cloud Functions deployed through the CLI.
package.json
{
"name": "sample-http",
"version": "0.0.1",
"dependencies": { "firebase-admin": "^9.4.2" }
}
index.js
const admin = require('firebase-admin')
admin.initializeApp();
exports.wooCommerceWebhook = async (req, res) => { // SEE COMMENT BELOW
const payload = req.body;
// Write to Firestore - People Collection
await admin.firestore().collection("people").doc().set({
people_EmailWork: payload.billing.email,
});
// Write to Firestore - Volociti Collection
await admin.firestore().collection("volociti").doc("fJHb1VBhzTbYmgilgTSh").collection("orders").doc("yzTBXvGja5KBZOEPKPtJ").collection("orders marketplace orders").doc().set({
ordersintuit_CustomerIPAddress: payload.customer_ip_address,
});
// Write to Firestore - Companies Collection
await admin.firestore().collection("companies").doc().set({
company_AddressMainStreet: payload.billing.address_1,
});
return res.status(200).end();
}; | unknown | |
d19808 | test | You generate a different query depending on which fields are entered.
Luckily this isn't too hard in SQL: all those fields are in the WHERE clause:
$where = [ 'foo' => 'bar', 'baz' => 0 ];
$sth = $db->prepare( "SELECT * FROM $table WHERE " .
implode( " AND ",
array_map( function($i) { return "$i=?"; }, array_keys( $where ) ) )
);
$sth->execute( array_values( $where ) );
Of course, if there are relationships between the fields, the query may become more complicated, but this is the gist of it.
A: Learning this takes time and patience. I have copied and pasted the variables from the form:
if(isset($_POST['Search'])){
$packID = $_POST['packID'];
$supplier_name = $_POST['supplier_name'];
$timber_species = $_POST['timber_species'];
$timber_product = $_POST['timber_product'];
$timber_grade = $_POST['timber_grade'];
$timber_finish = $_POST['timber_finish'];
$timber_treatment = $_POST['timber_treatment'];
$width = $_POST['width'];
$thickness = $_POST['thickness'];
$length = $_POST['length'];
$markup = $_POST['markup'];
} else{
$packID="";
$supplier_name="";
$timber_species="";
$timber_product="";
$timber_grade="";
$timber_finish="";
$timber_treatment="";
$width= "";
$thickness= "";
$length="";
}
How would you write this, Kenney, when the variables may or may not be set? I must admit I am a novice and keen to learn. It takes time and patience. | unknown | |
d19809 | test | <script src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>
In the above script, the sensor parameter is not required in "src"; instead, please provide a Google Maps API key and it will work!!
eg:
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=paste your key here"></script>
Get your key:
https://developers.google.com/maps/documentation/javascript/get-api-key#key
Thanks!!! | unknown | |
d19810 | test | Well, as far as I can tell, you need to make some kind of sequence for your IntegerField.
While it's not an id field, you can emulate one by taking the max value of that field among all objects:
from django.db.models import Max

max_val = YourModelClass.objects.aggregate(Max('that_field_in_question'))['that_field_in_question__max']
# then just make a new object, and assign max+1 to 'that_field_in_question'
new_obj = YourModelClass(that_field_in_question=max_val + 1)
new_obj.save() | unknown | |
d19811 | test | After doing some more digging I found what I was looking for. I found a rather well written example (in C#) of a technique called polygon clipping. This method finds the contact points in world coordinates. It goes through all the steps and code implementation for multiple different situations.
Here is the url: http://www.codezealot.org/archives/394 | unknown | |
d19812 | test | Though Chepner is right that awk and sed are not the exact tools for XML, in case you do NOT have xmlstarlet on your system, try the following.
echo $newdt
20181108
awk -v dat="$newdt" 'match($0,/>[0-9]+</){$0=substr($0,1,RSTART) dat substr($0,RSTART+RLENGTH-1)} 1' Input_file
A: If sed works for you -
sed -Ei 's/( name="END_DATE")>20181031</\1>20181108</' test.xml
And xml parser is probably a better idea, though.
If you need to embed the variable -
sed -Ei "s/( name=\"END_DATE\")>20181031</\1>$newdt</" test.xml | unknown | |
d19813 | test | Looks like you need to create the /data/db folder
try doing this in the terminal
sudo mkdir /data/db
then start mongodb
A: Mongo by default writes data to the /data folder, and the user who is running the mongo service does not have permission to create the /data folder.
You can get this information from this log snippet
2017-05-05T23:33:06.816+0600 I STORAGE [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating
2017-05-05T23:33:06.816+0600 I NETWORK [initandlisten] shutdown: going to close listening sockets...
So, you need to do this
sudo mkdir /data/db
sudo chown $USER -R /data/db # give permission to the user who is running mongo service
A: First run mongod.exe. If it gives any warnings regarding unsafe shutdowns, metrics, or diagnostics, ignore them and run mongo.exe in another CLI (command-line interface).
Even then if it does not work, just back up the ../data/db directories and redo the database.
Before access the db using a database driver(eg: mongoose,mongojs) make sure that the database is up and running.
$ mongo
MongoDB shell version v3.4.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.4
> use YourAwesomeDatabaseName
switched to db YourAwesomeDatabaseName
You are good to go!
A: I had an issue at /mongo.js:237:13 in the local terminal trying to run GCP-hosted mongo. I fixed it by removing tags in GCP.
A: Hey there, this issue occurs because your mongodb server is not running.
You can verify this by running:
sudo service mongodb status
You need to start the server by running the command:
sudo service mongodb start
Good Luck | unknown | |
d19814 | test | Foo<int> and Foo<double> are 2 different classes even though they share the name Foo, so you can't just put them into a vector as-is. But you can use boost::variant and store a vector of variants.
A: A solution would be to have your Foo inherit from an empty base class
struct CommonBase {};
template<typename T>
class Foo : public CommonBase
{
// ...
};
and then have a container of pointers to the common base
vector<CommonBase*> v;
If you want to keep away from inheritance, you could use boost::any to store any type in your container.
An interesting topic to look into (if you want to manually implement this kind of thing) is type erasure | unknown | |
d19815 | test | Okay, the problem is solved. The code shown above is working. There seems to be a problem with the opened docx document (a customer form) - it may be corrupt.
After using another form the code works :/ | unknown | |
d19816 | test | Your conditions are not clear, but maybe this is what you want:
DECLARE @T TABLE (Id INT, Name VARCHAR(25), Value INT);
DECLARE @YourId INT = 1;
DECLARE @YourName VARCHAR(25) ='a';
/**/
INSERT INTO @T VALUES
(1, 'a', 7),
(2, 'c', 7),
(1, 'g', 1),
(2, 'c', 2),
(4, 'g', 5),
(6, 't', 4);
/*First query*/
SELECT *
FROM @T
WHERE ID = @YourID AND Name = @YourName;
/*Second query*/
SELECT *
FROM @T
WHERE ID = @YourID;
If you want the result of both queries in one result, then you can use UNION ALL as:
SELECT *
FROM @T
WHERE ID = @YourID AND Name = @YourName
UNION ALL
SELECT *
FROM @T
WHERE ID = @YourID;
Demo.
A: Well, I am not sure what the OP wanted, but maybe it was something like this?
-- records where id and value are the same
SELECT * FROM @T WHERE ID = Value;
-- other records having the same ids as above, but DIFFERENT values
SELECT * FROM @T WHERE ID IN
(SELECT ID FROM @T WHERE ID = Value)
AND Id != Value;
Results:
Id Name Value
1 g 1
2 c 2
Id Name Value
1 a 7
2 c 7
Thanks to @Sami for providing the fiddle which I modified into this DEMO. | unknown | |
d19817 | test | You should probably look at the CJK package that is in the contrib area of Lucene. There is an analyzer and a tokenizer specifically for dealing with Chinese, Japanese, and Korean.
A: I found lucene-gosen while doing a search for my own purposes:
Their example looks fairly decent, but I guess it's the kind of thing that needs extensive testing. I'm also worried about their backwards-compatibility policy (or rather, the complete lack of one.) | unknown | |
d19818 | test | If your PDF file is in Firebase Storage, you can create a download URL for it with Firebase Storage, then open that URL in a WebView with the webview_flutter plugin; a sketch follows.
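A minimal sketch, assuming the pre-4.0 webview_flutter API and a placeholder storage path ('docs/file.pdf'):
import 'package:firebase_storage/firebase_storage.dart';
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';

// Fetch a download URL for the stored PDF
Future<String> pdfDownloadUrl() =>
    FirebaseStorage.instance.ref('docs/file.pdf').getDownloadURL();

class PdfViewPage extends StatelessWidget {
  final String url; // the download URL created above
  const PdfViewPage(this.url, {Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) => WebView(initialUrl: url);
} | unknown | |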
d19819 | test | You can use this regular expression to extract all rgb codes:
var regex = /rgb\(([^\)]+)\)/g;
A: You can use this:
string.replace(/^.*?linear-gradient *\((.+)/, function($1, $2) {
return $1.match(/rgb *\([^)]+\)/g); } );
//=> rgb(100, 106, 237),rgb(101, 222, 108)
Assuming there is no other rgb segment outside the closing bracket of linear-gradient
A: There is no need for your first extraction. Try the following:
var re = /rgb\(\d{1,3}, ?\d{1,3}, ?\d{1,3}\)/g;
// and then get array of matches
var rgbs = string.match(re);
// It will be equal to null if there are no matches. | unknown | |
d19820 | test | The documentation for compareTo mentions this situation:
It is strongly recommended, but not strictly required that
(x.compareTo(y)==0) == (x.equals(y))
Generally speaking, any class that implements the Comparable interface and violates this condition should clearly indicate this fact. The recommended language is "Note: this class has a natural ordering that is inconsistent with equals."
Therefore, if you want your object to be Comparable and yet still not allow two UNKNOWN objects to be equal via the equals method, you must make your compareTo "Inconsistent with equals."
An appropriate implementation would be:
public int compareTo(Tag t) {
return this.id.compareTo(t.id);
}
Otherwise, you could make it explicit that UNKNOWN values in particular are not Comparable:
public static boolean isUnknown(Tag t) {
return t == UNKNOWN || (t != null && "UNKNOWN".equals(t.id));
}
public int compareTo(Tag t) {
if (isUnknown(this) || isUnknown(t)) {
throw new IllegalStateException("UNKNOWN is not Comparable");
}
return this.id.compareTo(t.id);
}
A: You're correct that your compareTo() method is now inconsistent. It violates several of the requirements for this method. The compareTo() method must provide a total order over the values in the domain. In particular, as mentioned in the comments, a.compareTo(b) < 0 must imply that b.compareTo(a) > 0. Also, a.compareTo(a) == 0 must be true for every value.
If your compareTo() method doesn't fulfil these requirements, then various pieces of the API will break. For example, if you sort a list containing an UNKNOWN value, then you might get the dreaded "Comparison method violates its general contract!" exception.
How does this square with the SQL requirement that null values aren't equal to each other?
For SQL, the answer is that it bends its own rules somewhat. There is a section in the Wikipedia article you cited that covers the behavior of things like grouping and sorting in the presence of null. While null values aren't considered equal to each other, they are also considered "not distinct" from each other, which allows GROUP BY to group them together. (I detect some specification weasel wording here.) For sorting, SQL requires ORDER BY clauses to have additional NULLS FIRST or NULLS LAST in order for sorting with nulls to proceed.
So how does Java deal with IEEE 754 NaN which has similar properties? The result of any comparison operator applied to NaN is false. In particular, NaN == NaN is false. This would seem to make it impossible to sort floating point values, or to use them as keys in maps. It turns out that Java has its own set of special cases. If you look at the specifications for Double.compareTo() and Double.equals(), they have special cases that cover exactly these situations. Specifically,
Double.NaN == Double.NaN // false
Double.valueOf(Double.NaN).equals(Double.NaN) // true!
Also, Double.compareTo() is specified so that it considers NaN equal to itself (it is consistent with equals) and NaN is considered larger than every other double value including POSITIVE_INFINITY.
There is also a utility method Double.compare(double, double) that compares two primitive double values using these same semantics.
These special cases let Java sorting, maps, and so forth work perfectly well with Double values, even though this violates IEEE 754. (But note that primitive double values do conform to IEEE 754.)
How should this apply to your Tag class and its UNKNOWN value? I don't think you need to follow SQL's rules for null here. If you're using Tag instances in Java data structures and with Java class libraries, you'd better make it conform to the requirements of the compareTo() and equals() methods. I'd suggest making UNKNOWN equal to itself, to have compareTo() be consistent with equals, and to define some canonical sort order for UNKNOWN values. Usually this means sorting it higher than or lower than every other value. Doing this isn't terribly difficult, but it can be subtle. You need to pay attention to all the rules of compareTo().
The equals() method might look something like this. Fairly conventional:
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
return obj instanceof Tag && id.equals(((Tag)obj).id);
}
Once you have this, then you'd write compareTo() in a way that relies on equals(). (That's how you get the consistency.) Then, special-case the unknown values on the left or right-hand sides, and finally delegate to comparison of the id field:
public int compareTo(Tag o) {
if (this.equals(o)) {
return 0;
}
if (this.equals(UNKNOWN)) {
return -1;
}
if (o.equals(UNKNOWN)) {
return 1;
}
return id.compareTo(o.id);
}
I'd recommend implementing equals(), so that you can do things like filter UNKNOWN values of a stream, store it in collections, and so forth. Once you've done that, there's no reason not to make compareTo consistent with equals. I wouldn't throw any exceptions here, since that will just make standard libraries hard to use.
A: The simple answer is: you shouldn't.
You have contradictory requirements here. Either your tag objects have an implicit order (that is what Comparable expresses) OR you can have such "special" values that are not equal to anything, not even themselves.
As the other excellent answer and the comments point out: yes, you can somehow get there; for example by simply allowing for a.compare(b) < 0 and b.compare(a) < 0 at the same time; or by throwing an exception.
But I would simply be really careful about this. You are breaking a well established contract. And the fact that some javadoc says: "breaking the contract is OK" is not the point - breaking that contract means that all the people working on this project have to understand this detail.
Coming from there: you could go forward and simply throw an exception within compareTo() if a or b are UNKNOWN; by doing so you at least make it clear that one shouldn't try to sort() a List<Tag>, for example. But hey, wait - how would you find out that UNKNOWN is present in your list? Because, you know, UNKNOWN.equals(UNKNOWN) returns false; and contains() is using equals.
In essence: while technically possible, this approach causes breakages wherever you go. Meaning: the fact that SQL supports this concept doesn't mean that you should force something similar into your java code. As said: this idea is very much "off standards"; and is prone to surprise anybody looking at it. Aka "unexpected behavior" aka bugs.
A: A couple seconds of critical thinking:
There is already a null in Java and you can not use it as a key for a reason.
If you try and use a key that is not equal to anything else including
itself you can NEVER retrieve the value associated with that key! | unknown | |
d19821 | test | The Session_End event fires when the server-side session times out, which is (default) 20 minutes after the last request has been served. The server does NOT know when the user "navigates away" or "closes the browser", so can't act on that.
A: You could use the onUserExit jQuery plugin to call some server-side code and abandon the session. Activate onUserExit on document ready:
<script type="text/javascript">
jQuery(document).ready(function () {
jQuery().onUserExit({
execute: function () {
jQuery.ajax({
url: '/EndSession.ashx', async: false
});
},
internalURLs: 'www.yourdomain.com|yourdomain.com'
});
});
</script>
And in EndSession.ashx abandon the session and server side Session_End will be called :
public void ProcessRequest(HttpContext context)
{
context.Session.Abandon();
context.Response.ContentType = "text/plain";
context.Response.Write("My session abandoned !");
}
note that this will not cover all cases (for example if the user kills the browser through Task Manager).
A: No, it will take time to update, until the session has actually timed out...
A: It's a limitation of this approach that the server will think the user is logged in until the session actually ends, which will happen only when the number of minutes specified in the session timeout configuration has passed.
Check this post: http://forums.asp.net/t/1283350.aspx
Found this Online-active-users-counter-in-ASP-NET | unknown | |
d19822 | test | The payment wall iframe is configured by PayPal with a whitelist of allowed domains (see Content Security Policy).
*The image URL must be https (specified in the docs)
*Your images must be on a server on the whitelist.
*This includes: The image URL can not have a custom port.
*PayPal automatically adds the domain containing the PPP script to the whitelist.
If you look into the browser console you might be able to see an error like this:
Refused to load the image 'https://my-domain.de:7443/images/payment_sofort_small.png' because it violates the following Content Security Policy directive: "img-src https://*.paypalobjects.com https://ak1s.abmr.net https://ak1.abmr.net https://ak1s.mathtag.com https://akamai.mathtag.com https://my-domain.de".
This is how it looked in my case: | unknown | |
d19823 | test | Using the template_redirect hook, you can redirect the user to a specific page when there are "no results found" on a product search query, like:
add_action( 'template_redirect', 'no_products_found_redirect' );
function no_products_found_redirect() {
global $wp_query;
if( isset($_GET['s']) && isset($_GET['post_type']) && 'product' === $_GET['post_type']
&& ! empty($wp_query) && $wp_query->post_count == 0 ) {
wp_redirect( get_permalink( 99 ) );
exit();
}
}
Code goes in functions.php file of your active child theme (or active theme). Tested and works. | unknown | |
d19824 | test | Your code contains some errors. See a solution in the snippet:
const csvData = `date,added,updated,deleted
2021-09-15,10,9,8
2021-09-16,20,11,7
2021-09-17,15,12,9
2021-09-18,20,9,8
2021-09-19,20,9,8
`;
const ActionsLineGraph = (props) => {
const svgRef = React.useRef();
// will be called initially and on every data change
React.useEffect(() => {
const data = d3.csvParse(csvData);
console.log(data);
const parseTime = d3.timeParse("%Y-%m-%d");
const svg = d3.select(svgRef.current);
const from = parseTime(data[0].date);
const to = parseTime(data[data.length-1].date);
console.log('FROM: ', from);
console.log('TO: ', to);
const xScale = d3.scaleTime()
.domain([to, from])
.range([300, 0]);
const yScale = d3.scaleLinear().domain([0, 30]).range([150, 0]);
const xAxis = d3.axisBottom(xScale)
.ticks(data.length)
.tickFormat((index) => index + 1);
svg.select(".x-axis").style("transform", "translateY(150px)").call(xAxis);
const yAxis = d3.axisRight(yScale);
svg.select(".y-axis").style("transform", "translateX(300px)").call(yAxis);
// set the dimensions and margins of the graph
const margin = { top: 20, right: 20, bottom: 50, left: 70 },
width = 300 - margin.left - margin.right,
height = 150 - margin.top - margin.bottom;
// add X axis and Y axis
const x = d3.scaleTime().range([0, width]);
const y = d3.scaleLinear().range([height, 0]);
const path = data.reduce((path, item, index) => {
const x = xScale(parseTime(item.date));
const y = yScale(Number(item.added));
const point = `${x},${y}`;
return index === 0 ? `M ${point}` : `${path} L ${point}`;
}, null);
console.log('PATH: ', path);
svg.append("path")
.attr("class", "line")
.attr("fill", "none")
.attr("stroke", "steelblue")
.attr("stroke-width", 10)
.attr("d", path)
}, [svgRef]);
return (
<svg ref={svgRef}>
<g className="x-axis" />
<g className="y-axis" />
</svg>
);
}
ReactDOM.render(<ActionsLineGraph />, document.querySelector("#app"))
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.14.0/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.14.0/umd/react-dom.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/7.6.1/d3.min.js"></script>
<div id="app"></div> | unknown | |
d19825 | test | Try something like this.
array.map(snippet => (
<>
{snippet.question.map(que => {
if (que.type === "text") {
return <Typography>{que.value}</Typography>;
} else if (que.type === "number") {
return <Number>{que.value}</Number>;
}
return null;
})}
{snippet.answer.map(ans => {
if (ans.type === "text") {
return <Typography>{ans.value}</Typography>;
} else if (ans.type === "number") {
return <Number>{ans.value}</Number>;
}
return null;
})}
</>
)); | unknown | |
d19826 | test | You can do everything with a single (anonymous) function, as demonstrated below:
$("body").on("click","button",function() {
with ($(this))
$('#text').val(hasClass('sel')? prev('input').val():'')
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<!-- code 1 -->
<input id="code1" type="text" value="codeval1" disabled />
<button class="sel">Select</button>
<br>
<!-- code 2 -->
<input id="code2" type="text" value="codeval2" disabled />
<button class="sel">Select</button>
<input id="text" value="" type="text"></input>
<button class="clear" >clear selection</button>
I simply test whether the class sel exists on the clicked button ($(this)) and then copy the value of the previous input field to the target; otherwise I place an empty string into the target. My event binding is done on the <body> of your page, so potentially all buttons will have this functionality. In a real example you would restrict the scope to a container (<div> or <form>) in which all buttons will have this functionality.
A: Simple solution
$('.sel').on('click', function(){
var value= $(this).prev('input').val();
$('#text').val(value);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<!-- code 1 -->
<input id="code1" type="text" value="codeval1" disabled />
<button class="sel">Select</button>
<br>
<!-- code 2 -->
<input id="code2" type="text" value="codeval2" disabled />
<button class="sel">Select</button>
<input id="text" value="" type="text"></input>
<button class="clear" >clear selection</button> | unknown | |
d19827 | test | Your url to the controller action seems incorrect. You have var url = "/Admin/SaveLocationApiAsync/Post/"; when it should be var url = "/Admin/SaveLocationApiAsync";
Another approach to getting the correct url would be:
var url = '@Url.Action("SaveLocationApiAsync", "<ControllerName>")';
Also, in your ajax error handler you can get the HTTP status code and error message, which would help.
error: function (jqXHR, textStatus, errorThrown) {
swal("Error", "Something went wrong.\nPlease contact help.", "error");
}
EDIT:
I should have prefaced that using Url.Action works when your JavaScript is in a view (assuming Razor view in this case).
Fiddler is great tool to use when debugging ajax calls. | unknown | |
d19828 | test | Try this instead to only select the visible elements under the tbody:
$('tbody :visible').highlight(myArray[i]);
A: If you want to get the visible tbody elements, you could do this:
$('tbody:visible').highlight(myArray[i]);
It looks similar to the answer that Agent_9191 gave, but this one removes the space from the selector, which makes it select the visible tbody elements instead of the visible descendants.
EDIT:
If you specifically wanted to use a test on the display CSS property of the tbody elements, you could do this:
$('tbody').filter(function() {
return $(this).css('display') != 'none';
}).highlight(myArray[i]);
A: Use like this:
if( $('#foo').is(':visible') ) {
// it's visible, do something
}
else {
// it's not visible so do something else
}
Hope it helps!
A: $('tbody').find('tr:visible').highlight(myArray[i]);
A: As @Agent_9191 and @partick mentioned you should use
$('tbody :visible').highlight(myArray[i]); // works for all children of tbody that are visible
or
$('tbody:visible').highlight(myArray[i]); // works for all visible tbodys
Additionally, since you seem to be applying a class to the highlighted words, instead of using jquery to alter the background for all matched highlights, just create a css rule with the background color you need and it gets applied directly once you assign the class.
.highlight { background-color: #FFFF88; }
A: You can use the following code to test if display is equivalent to none:
if ($(element).css('display') === 'none' ){
// do the stuff
} | unknown | |
d19829 | test | You are referencing the same instance of the array in every object that you build.
userSelects[idx].excludedUsers = excludedUsers; does not copy the excludedUsers array into a new array for the object, it assigns a reference to the original array to userSelects[idx].excludedUsers.
If you want to clone the array you can use Array.slice() to do a shallow copy:
userSelects[idx].excludedUsers = excludedUsers.slice(0) | unknown | |
d19830 | test | Since JS is single threaded, one function runs to completion before control can go to any other function.
Perhaps you have an async call, such as an AJAX call, that indeed calls a function asynchronously, but still, whatever callback you give it will again run to completion before passing control to the next function in the queue.
So the question becomes: what is the exact scenario you want to implement?
Maybe you want to control which function gets executed depending on some condition. That you can do using a flag, for example:
var runTDfn = true;
if(runTDfn){
ShowAction('Some title', 'Some artist', 'Some genre');
}
else {
SelectTD(0, 1, 'some color');
} | unknown | |
d19831 | test | You have not mentioned which Azure IoT Hub scale tier is used. Basically there are two pricing tiers, Basic and Standard, with significantly different costs and capabilities. The Basic tier offers only services for one-way communication between the devices and Azure IoT Hub.
Based on that, the following scenarios can be used for your business case:
1. Basic Tier (non event-driven solution)
The device periodically pushes telemetry and non-telemetry messages, as needed, to the Azure IoT Hub, where the non-telemetry messages are routed to the Azure Function via the Service Bus Queue/Topic. The responsibility of this non-telemetry pipe is to persist the real device state in the database. Note that the 6M messages will cost only $50/month. The back-end application can query this database for device state at any time.
2. Standard Tier (event-driven solution) In this scenario you can use a Device Twin of the Azure IoT Hub to store the real device state in the cloud back-end (described by @HelenLo). The device can be triggered to update its state (reported properties) by a C2D message, a desired-property change, a method invocation, or a device-edge trigger.
The Azure IoT Hub has the capability to run scheduled jobs for multiple devices.
In this solution, the back-end application can at any time call a job for ExportDevicesAsync to the blob storage; see more details here. Note that the 6M messages will cost $250/month.
As you can see above, each scenario needs a different device logic model, based on the communication capabilities between the devices and Azure IoT Hub and back. Note that there are some limitations for these communications; see more details here.
A: You can consider using Device Twin of IoT Hub
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins
Use device twins to:
*Store device-specific metadata in the cloud. For example, the deployment location of a vending machine.
*Report current state information such as available capabilities and conditions from your device app. For example, a device is connected to your IoT hub over cellular or WiFi.
*Synchronize the state of long-running workflows between device app and back-end app. For example, when the solution back end specifies the new firmware version to install, and the device app reports the various stages of the update process.
*Query your device metadata, configuration, or state.
A: IoT Hub provides you with the ability to connect your devices over various protocols. Preferred protocols are messaging protocols, such as MQTT or AMQP, but HTTPS is also supported. Using IoT hub, you do not request data from the device, though. The device will send the data to the IoT Hub. You have to options to implement that with IoT Hub:
*The device connects to the IoT Hub whenever it has some data to be sent, and pushes the data up to IoT Hub
*The device does not send any data on its own, but stays always or at least regularly connected to IoT Hub. You then can send a cloud to device message over IoT Hub to the device, requesting the data to be sent. The device then sends the data the same way it would in the first option.
When the data then has been sent to IoT Hub, you need to push it somewhere where it is persistently stored - IoT Hub only keeps messages for 1 day by default. Options for this are:
*Create a blob storage account and push to that directly from IoT Hub using a custom endpoint. This would probably be the easiest and cheapest. Depending on how you need to access your data, a blob might not be the best option, though
*Create a function app, create a function with an EventHubTrigger, connect it to IoT Hub and let the function process incoming data by outputting it into any kind of data sink, such as SQL, CosmosDB, Table Storage... | unknown | |
d19832 | test | I ran into the same issue, and after struggling for a couple of hours I went to my seller account and recreated my "Application Id" and "Application Secret". The only difference I made was that I selected "self_access_application" instead of "third_party_application" this time, and I was good to go.
Please refer: https://nimb.ws/sziWmA
Hope this helps
Thanks
A: You can try this code; I also faced the same issue.
$url = "https://api.flipkart.net/oauth-service/oauth/token?grant_type=client_credentials&scope=Seller_Api";
$curl = curl_init();
curl_setopt($curl, CURLOPT_USERPWD, config('constants.flipkart.application_id').":".config('constants.flipkart.secret_key'));
curl_setopt($curl, CURLOPT_URL,$url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($curl);
$token = json_decode($result,true);
if(isset($token['access_token'])){
$this->access_token = $token['access_token'];
}
A: You can try this; it will be helpful, Python/Odoo developer:
def flipkart_token_generation(self):
    if not self.flipkart_sandbox_app_id or not self.flipkart_sandbox_cert_id:
        raise UserError(_("Flipkart: cannot fetch OAuth token without credentials."))
    else:
        url = "https://sandbox-api.flipkart.net/oauth-service/oauth/token"
        data = {'grant_type': 'client_credentials', 'scope': 'Seller_Api'}
        response_json = requests.get(url, params=data, auth=(self.flipkart_sandbox_app_id, self.flipkart_sandbox_cert_id)).json()
        self.env['ir.config_parameter'].sudo().set_param('flipkart_sandbox_token', response_json["access_token"]) | unknown | |
d19833 | test | I think the simplest way would be to write a web service (WCF could be used, for example) which returns the said URL to the other web site. The "request for the URL" would just be a web service call from the other web site to your web service. A minimal contract sketch follows.
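A minimal WCF contract sketch (the service and operation names are hypothetical):
using System.ServiceModel;

[ServiceContract]
public interface IUrlService
{
    [OperationContract]
    string GetDownloadUrl(); // returns the URL the other web site asked for
}

public class UrlService : IUrlService
{
    public string GetDownloadUrl()
    {
        return "https://example.com/some/resource"; // look up the real URL here
    }
}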
A: Sounds like your best bet would be to create a web service that would be consumed by the other websites.
The MSDN site actually has a good overview and a couple of decent tutorials: ASP.NET Web Services | unknown | |
d19834 | test | If I understood you right, you could do:
def train_switch(cars, s_x):
    i = 0
    s = []
    out = []
    for c in s_x:
        if c == "s":
            s.append(cars[i])
            i += 1
        elif c == "x":
            out.append(s.pop())
    return out
as lists can be used as stacks, with append as the push operation | unknown | |
d19835 | test | As long as you have a reference to the JPanel, you can add whatever GUI-element you want, by calling add(JComponent comp) on the JPanel.
So, you can do something like this:
class Panel extends JPanel{
...
}
class Main{
public Main(JPanel thePanel){
thePanel.add(new JButton("Hello"));
}
}
Was this what you were looking for?
You can also update the fields added to the panel from another class, if you have a public accessor method set up in the class. So in your panel class, you have a method:
public JButton getButton(){
return button;
}
Then you can access the button from whatever class with a reference to your panel class, like this:
panel.getButton().setText("Some text");
Note that the button could just as well be public; then you could access it directly: panel.button.setText("Some text"); but this is not considered good code, as it violates some general good OOP practices, not relevant to mention here. | unknown | |
d19836 | test | Try Process Monitor to see which path it tries to execute when it fails... | unknown | |
d19837 | test | You could try adding an animation listener to the animation. In the listener, there is onAnimationEnd(), which gets called when the animation is done. Here you may start succeeding animations so that they appear chained; see the sketch below.
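A sketch, assuming two view animations anim1 and anim2 that you want to run back to back:
anim1.setAnimationListener(new Animation.AnimationListener() {
    @Override public void onAnimationStart(Animation animation) { }
    @Override public void onAnimationRepeat(Animation animation) { }

    @Override
    public void onAnimationEnd(Animation animation) {
        view.startAnimation(anim2); // start the next animation when the first finishes
    }
});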
Android Guide on Animation - Animation Listeners | unknown | |
d19838 | test | Yes, it is possible to put an XPage in the sidebar without Composite Applications. What you need to do here is go to File -> Preferences -> Widget Catalog and check "Show widgets toolbar and My Widgets Panel". Now open the XPage you want to create as a widget in XPiNC. In the toolbar, click on "Configure a widget from the current context". Choose "Display as Panel", click Next and then the Finish button. Your XPage should now display in the sidebar. Another way is to just click the "Display as Panel" button in the MyWidgets toolbar; this will also put your XPage in the sidebar. If you go to the MyWidgets sidebar panel, you will be able to see your XPage widget there, and it is possible to export it as a widget to send to other users. Or use the widget catalog and deploy the widget to your users via a policy setting. | unknown | |
d19839 | test | From the Rack spec:
The Body must respond to each and must only yield String values. The Body itself should not be an instance of String, as this will break in Ruby 1.9.
In Ruby 1.8 Strings did respond to each, but that changed in 1.9.
The simplest solution would be to just return an array containing the string:
[status, headers, [response_body]] | unknown | |
d19840 | test | You can try this:
ListView1.Items.Add("Buah").SubItems.Add("Apel")
ListView1.Items.Add("Buah").SubItems.Add("Mangga")
ListView1.Items.Add("Buah").SubItems.Add("Jambu")
ListView1.Items.Add("Buah").SubItems.Add("Durian")
ListView1.Items.Add("Buah").SubItems.Add("Rambutan")
ListView1.Items.Add("Sayur").SubItems.Add("Apel")
ListView1.Items.Add("Sayur").SubItems.Add("Apel")
ListView1.Items.Add("Sayur").SubItems.Add("Apel")
Dim firstColDistinctItems() As String = ListView1.Items.Cast(Of ListViewItem).Select(Function(item As ListViewItem) item.Text).Distinct().ToArray()
For i = 0 To firstColDistinctItems.Count - 1
Dim repeatCount As Integer = ListView1.Items.Cast(Of ListViewItem).Where(Function(item As ListViewItem) item.Text = firstColDistinctItems(i)).Count
ListView2.Items.Add(firstColDistinctItems(i)).SubItems.Add(repeatCount)
Next
A: Although I haven't tried it, Youssef13's answer should work.
Here's a "more readable" routine that will do just what you want.
Hopefully, the comments are enough to understand how it works.
Private Sub CountItems()
' Delete any existing items in lv2
lv2.Items.Clear()
' A dictionary that will contain a list of all unique items from lv1 and their occurrence count
Dim items As New Dictionary(Of String, Integer)
' Simple For loop to scan all items in lv1
For Each itm As ListViewItem In lv1.Items
If items.ContainsKey(itm.Text) Then ' Check if the item has already been added
items(itm.Text) += 1 ' In that case, increment its counter
Else
items.Add(itm.Text, 1) ' Otherwise, add it as a new item and set its internal counter to 1
End If
Next
' Finally, display the dictionary contents in lv2
For Each itm In items
lv2.Items.Add(itm.Key).SubItems.Add(itm.Value.ToString())
Next
End Sub
UPDATE: Here's a highly optimized (performance-wise) version:
Private Sub CountItems()
lv2.Items.Clear()
Dim items As IEnumerable(Of Tuple(Of String, Integer)) = lv1.Items.Cast(Of ListViewItem).GroupBy(Function(i) i.Text).Select(Function(i) Tuple.Create(i.Key, i.Count))
For Each itm In items
lv2.Items.Add(itm.Item1).SubItems.Add(itm.Item2.ToString())
Next
End Sub | unknown | |
d19841 | test | You can make it much more concise with a list comprehension:
from fractions import gcd
print(" | 2 3 4 5 6 7 8 9 10 11 12 13 14 15")
print("-----------------------------------------------")
xlist = range(2,16)
ylist = range(2,51)
print("\n".join(" ".join(["%2d | " % b] + [("%2d" % gcd(a, b)) for a in xlist]) for b in ylist))
Output:
| 2 3 4 5 6 7 8 9 10 11 12 13 14 15
-----------------------------------------------
2 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1
3 | 1 3 1 1 3 1 1 3 1 1 3 1 1 3
4 | 2 1 4 1 2 1 4 1 2 1 4 1 2 1
5 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5
6 | 2 3 2 1 6 1 2 3 2 1 6 1 2 3
7 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1
8 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1
9 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3
10 | 2 1 2 5 2 1 2 1 10 1 2 1 2 5
11 | 1 1 1 1 1 1 1 1 1 11 1 1 1 1
12 | 2 3 4 1 6 1 4 3 2 1 12 1 2 3
13 | 1 1 1 1 1 1 1 1 1 1 1 13 1 1
14 | 2 1 2 1 2 7 2 1 2 1 2 1 14 1
15 | 1 3 1 5 3 1 1 3 5 1 3 1 1 15
16 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1
17 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
18 | 2 3 2 1 6 1 2 9 2 1 6 1 2 3
19 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
20 | 2 1 4 5 2 1 4 1 10 1 4 1 2 5
21 | 1 3 1 1 3 7 1 3 1 1 3 1 7 3
22 | 2 1 2 1 2 1 2 1 2 11 2 1 2 1
23 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
24 | 2 3 4 1 6 1 8 3 2 1 12 1 2 3
25 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5
26 | 2 1 2 1 2 1 2 1 2 1 2 13 2 1
27 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3
28 | 2 1 4 1 2 7 4 1 2 1 4 1 14 1
29 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
30 | 2 3 2 5 6 1 2 3 10 1 6 1 2 15
31 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
32 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1
33 | 1 3 1 1 3 1 1 3 1 11 3 1 1 3
34 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1
35 | 1 1 1 5 1 7 1 1 5 1 1 1 7 5
36 | 2 3 4 1 6 1 4 9 2 1 12 1 2 3
37 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
38 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1
39 | 1 3 1 1 3 1 1 3 1 1 3 13 1 3
40 | 2 1 4 5 2 1 8 1 10 1 4 1 2 5
41 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
42 | 2 3 2 1 6 7 2 3 2 1 6 1 14 3
43 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
44 | 2 1 4 1 2 1 4 1 2 11 4 1 2 1
45 | 1 3 1 5 3 1 1 9 5 1 3 1 1 15
46 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1
47 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1
48 | 2 3 4 1 6 1 8 3 2 1 12 1 2 3
49 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1
50 | 2 1 2 5 2 1 2 1 10 1 2 1 2 5
This works in Python2 and Python3. If you want zeros at the beginning of each one-digit number, replace each occurence of %2d with %02d. You probably shouldn't print the header like that, but do it more like this:
from fractions import gcd
xlist = range(2, 16)
ylist = range(2, 51)
string = " | " + " ".join(("%2d" % x) for x in xlist)
print(string)
print("-" * len(string))
print("\n".join(" ".join(["%2d | " % b] + [("%2d" % gcd(a, b)) for a in xlist]) for b in ylist))
This way, even if you change xlist or ylist, the table will still look good.
A: Your problem is that the python print statement adds a newline by itself.
One solution to this is to build up your own string to output piece by piece and use only one print statement per line of the table, like such:
from fractions import gcd
print "| 2 3 4 5 6 7 8 9 10 11 12 13 14 15"
print "-----------------------------------"
xlist = range(2,16)
ylist = range(2,51)
for b in ylist:
    output = str(b) + " | "  # For each number in ylist, make a new string with this number
    for a in xlist:
        output = output + str(gcd(a, b)) + " "  # Append to this for each number in xlist
    print output  # Print the string you've built up
Example output, by the way:
| 2 3 4 5 6 7 8 9 10 11 12 13 14 15
-----------------------------------
2 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1
3 | 1 3 1 1 3 1 1 3 1 1 3 1 1 3
4 | 2 1 4 1 2 1 4 1 2 1 4 1 2 1
5 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5
6 | 2 3 2 1 6 1 2 3 2 1 6 1 2 3
7 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1
8 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1
9 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3
A: You can specify what character ends the line using the end parameter of print.
from fractions import gcd
print("| 2 3 4 5 6 7 8 9 10 11 12 13 14 15")
print("-----------------------------------")
xlist = range(2,16)
ylist = range(2,51)
for b in ylist:
    print(str(b) + " | ", end="")
    for a in xlist:
        print(gcd(a, b), end=" ")
    print("")  # Newline
If you are using python 2.x, you need to add from __future__ import print_function to the top for this to work. | unknown | |
d19842 | test | You can try something like:
@model.GetType().Name | unknown | |
d19843 | test | Here's a base R option using max.col :
#Select the columns to check
cols <- grep('CHECK', names(mydf), value = TRUE)
#Compare the value
mat <- mydf[cols] == 'A001'
#Find the column name where the value exist in each row
mydf$result <- cols[max.col(mat)]
#If the value does not exist in the row turn to `NA`.
mydf$result[rowSums(mat) == 0] <- NA
mydf
# case id CHECK1 CHECK2 CHECK3 CHECK4 CHECK5 result
#1 1 10 A001 Z001 Z001 Z001 Z001 CHECK1
#2 2 11 B001 B001 B001 B001 B001 <NA>
#3 3 12 C001 C001 C001 A001 C001 CHECK4
A: As a supplement:
I think the long format is a nice option for this case.
Because it clearly shows the position of "A001" and is easy to filter, even for more CHECK columns.
I will use data.table as a demonstration.
library(data.table)
setDT(mydf)
dt <- melt(mydf, id = 1:2, measure.vars = patterns("CHECK*"))
dt
Long format
case id variable value
1: 1 10 CHECK1 A001
2: 2 11 CHECK1 B001
3: 3 12 CHECK1 C001
4: 1 10 CHECK2 Z001
5: 2 11 CHECK2 B001
6: 3 12 CHECK2 C001
7: 1 10 CHECK3 Z001
8: 2 11 CHECK3 B001
9: 3 12 CHECK3 C001
10: 1 10 CHECK4 Z001
11: 2 11 CHECK4 B001
12: 3 12 CHECK4 A001
13: 1 10 CHECK5 Z001
14: 2 11 CHECK5 B001
15: 3 12 CHECK5 C001
Filter A001
dt[value == "A001"]
case id variable value
1: 1 10 CHECK1 A001
2: 3 12 CHECK4 A001 | unknown | |
d19844 | test | The typical way of controlling visibility is to use the visibility attribute with a conditional statement, and then set the associated binding variable, such as,
in xml:
<Label class="label" text="Label Text" visibility="{{ showLabel ? 'visible' : 'collapsed' }}" />
in js:
viewModel.set("showLabel", "true");
If you really want to control the class, then you could do something like,
<Label class="{{ showLabel ? 'labelShow' : 'labelHide' }}" text="Label Text" />
This may be somewhat simpler than the approach you're taking now.
A: You are not supposed to use the cssClasses property; the easiest way is to pass all your class names, separated by spaces, to the className property.
Internally the framework listens for changes on className, parses it, stores the names in the cssClasses set, and triggers a UI update.
But if you think playing with className is hard and you would rather use cssClasses, then you should call the private method ._onCssStateChange() on the view instance to update the UI. | unknown | |
d19845 | test | int i = 'd' - 'a';
will have i set to 3, which is the difference | unknown | |
d19846 | test | You need to declare your variables with a keyword such as var, let, or const; otherwise the variable becomes global. It all boils down to a scoping issue.
let timeIni = Number($.now());
Here is the fiddle working: https://jsfiddle.net/s8650s18/22/ | unknown | |
d19847 | test | I was wondering if any other app can see this request
The app that responds to your startActivityForResult() call can see the request. That could be:
*
*The real unmodified app that you wish to send the data to
*A hacked version of that app
*A completely independent app that happens to match your Intent, particularly if you are using an implicit Intent
You can try to check signatures to confirm whether the other app is indeed the real unmodified app, so you avoid the latter two scenarios.
On older versions of Android, the Intent would be visible to other apps as part of the data backing the overview screen (recent-tasks list). That was cleared up somewhere in the 4.x series IIRC.
Those are the only two attacks that I know of for non-rooted devices. | unknown | |
d19848 | test | To have an NSDictionary-type collection where your keys are pointers, you might need the NSMapTable class.
From this link:
NSMapTable (as the name implies) is more suited to mapping in a
general sense. Depending on how it is constructed, NSMapTable can
handle the "key-to-object" style mapping of an NSDictionary but it can
also handle "object-to-object" mappings — also known as an
"associative array" or simply a "map".
A: Using NSMutableDictionary is a bad idea. It copies keys, so your memory usage will increase dramatically. Use NSMapTable. You can configure it to use non-copyable keys and to store weak references to values, for example:
NSMapTable *mapTable = [NSMapTable mapTableWithKeyOptions:NSMapTableStrongMemory
valueOptions:NSMapTableWeakMemory]; | unknown | |
d19849 | test | I had missed defer closeSession(session) in ReceiveData; the shape of the fix is sketched below.
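For reference (openSession/closeSession stand in for the real helpers):
func ReceiveData() {
    session := openSession()
    defer closeSession(session) // now runs on every return path
    // ... use session ...
} | unknown | |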
d19850 | test | To create a folder inside your Internal Storage, try out this code snippet
val folder = filesDir
val f = File(folder, "folder_name")
f.mkdir()
Finally to check if the folder is created open Device Explorer in Android Studio, then follow the path
data->data->your app package name -> files-> here should be your folder that you created programmatically. Hope this helps
A:
I am new to Kotlin and have read lots of tutorials, tried bunches of code but still can't understand how to create a folder in internal storage.
It seems as though you really want to be creating a directory in external storage.
Since that is no longer being supported on Android 10 (by default) and Android R+ (for all apps), I recommend that you let the user create the directory themselves, and you get access to it via ACTION_OPEN_DOCUMENT_TREE and the Storage Access Framework.
When this app launches I go to a browser and don't see any new folder.
The root of external storage is Environment.getExternalStorageDirectory().
A: val appDirctory =File(Environment.getExternalStorageDirectory().path + "/test")
appDirctory.mkdirs()
A: Can you try this ?
class MainActivity() : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
var filename = "test.txt"
val folder = File("/sdcard/MetroPol/")
folder.mkdirs()
val outputFile = File(folder, filename)
try {
val fos = FileOutputStream(outputFile)
} catch (e: FileNotFoundException) {
e.printStackTrace()
}
}
} | unknown | |
d19851 | test | @kittyminky, you are right.
The solution is:
Accounts.ui.config({ requestPermissions: { facebook: [the_permissons_you_want] } });
A: In my case, using Accounts.ui.config yields an error, since I only have Accounts defined; Accounts.ui isn't defined for me.
Perhaps it is because I didn't add that package, but there must be a way without using the .ui? | unknown | |
d19852 | test | I found the solution.
I had the following code in my web.xml. I had to comment it out to get it working
<context-param>
<param-name>org.richfaces.LoadScriptStrategy</param-name>
<param-value>NONE</param-value>
</context-param>
A: This script error occurs because one of the fiji-related js files is missing from your jsf page. Please add the following script to your xhtml/jsf page to resolve this issue:
<script type="text/javascript" src="/[your-application-context-name]/a4j/g/3_3_1.CR1com/exadel/fiji/renderkit/html/AC_OETags.js.jsf" />
Before testing this in your application, please check that you are able to view Javascript with your browser using the following URL:
http://[url]:8080/[your-application-context-name]/a4j/g/3_3_1.CR1com/exadel/fiji/renderkit/html/AC_OETags.js.jsf
or
http://[url]:8080/your-application-context-name/a4j/g/3_3_1.CR1com/exadel/fiji/renderkit/html/AC_OETags.js
If you are able to see the script content in your browser, your problem has been resolved, and there will be no need to comment out the web.xml changes. | unknown | |
d19853 | test | I'm surprised that the .on works at all. According to the documentation (http://api.jquery.com/on/), its second parameter should be a selector string and not a jQuery object.
I would try something like this:
$(document).on("change", "#<%= cboxFirst.ClientID %>", function () {
if ($(this).is(":checked")) {
ddlAge.attr("disabled", "disabled");
ddlAge.val("");
}
else { ddlAge.removeAttr("disabled"); }
});
As long as the ClientID stays the same the event will still work even when the UpdatePanel replaces all its content.
Also, .live has been deprecated in favor of .on. | unknown | |
d19854 | test | You can use the pipes module:
The pipes module defines a class to abstract the concept of a pipeline — a sequence of converters from one file to another.
Sure, the syntax won't be the same as a shell pipe, but why reinvent the wheel?
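A minimal example of the pipeline idea, adapted from the pipes module documentation:
import pipes

t = pipes.Template()
t.append('tr a-z A-Z', '--')   # '--' marks a step that reads stdin and writes stdout
f = t.open('pipefile', 'w')    # writes sent to f flow through the pipeline
f.write('hello world')
f.close()
print(open('pipefile').read())  # HELLO WORLD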
A: You may be thinking of coroutines. Check out this very interesting presentation by David Beazley. | unknown | |
d19855 | test | Moles is capable of detouring calls to managed code. This class is clearly not dealing with managed code. Try creating a stub for this class manually. This means creating an INativeMethods interface, having NativeMethods implement INativeMethods, and then using the interface as the stub, as usual. Moles will then generate the stub type SINativeMethods from the interface, for use in test projects. A sketch of that refactoring follows.
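The member shown here is hypothetical; it just mirrors whatever the static NativeMethods API exposes:
public interface INativeMethods
{
    int DoNativeCall(string arg);
}

internal class NativeMethods : INativeMethods
{
    public int DoNativeCall(string arg)
    {
        return 0; // forward to the real P/Invoke entry point here
    }
}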
A: "Thus, if a method has no body (such as an abstract method), we cannot detour it." - Moles Dev | unknown | |
d19856 | test | There is also a "dumb" way of achieving the end goal, is to create a new table without the column(s) not wanted. Using Hive's regex matching will make this rather easy.
Here is what I would do:
-- make a copy of the old table
ALTER TABLE table RENAME TO table_to_dump;
-- make the new table without the columns to be deleted
CREATE TABLE table AS
SELECT `(col_to_remove_1|col_to_remove_2)?+.+`
FROM table_to_dump;
-- dump the table
DROP TABLE table_to_dump;
If the table in question is not too big, this should work just fine.
A: suppose you have an external table viz. organization.employee as: (not including TBLPROPERTIES)
hive> show create table organization.employee;
OK
CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string,
`updated_by` string,
`updated_date` timestamp)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
You want to remove updated_by, updated_date columns from the table. Follow these steps:
create a temp table replica of organization.employee as:
hive> create table organization.employee_temp as select * from organization.employee;
drop the main table organization.employee.
hive> drop table organization.employee;
remove the underlying data from HDFS (need to come out of hive shell)
[nameet@ip-80-108-1-111 myfile]$ hadoop fs -rm hdfs://getnamenode/apps/hive/warehouse/organization.db/employee/*
create the table with removed columns as required:
hive> CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
insert the original records back into original table.
hive> insert into organization.employee
select employee_id, employee_name from organization.employee_temp;
finally drop the temp table created
hive> drop table organization.employee_temp;
A: You cannot drop a column directly from a table using the command ALTER TABLE table_name DROP col_name;
The only way to drop a column is using the REPLACE COLUMNS command. Let's say I have a table emp with id, name and dept columns, and I want to drop the id column of table emp. So provide all the columns you want to keep in the REPLACE COLUMNS clause. The command below will drop the id column from the emp table.
ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
A: ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
The statement above can only change the schema of a table, not the data.
A solution to this problem is to copy the data into a new table:
INSERT INTO <New Table> SELECT <selective columns> FROM <Old Table>
A: ALTER TABLE is not yet supported for non-native tables; i.e. what you get with CREATE TABLE when a STORED BY clause is specified.
check this https://cwiki.apache.org/confluence/display/Hive/StorageHandlers
A: After a lot of mistakes, in addition to the above explanations, I would add these simpler answers.
Case 1: Add a new column named new_column
ALTER TABLE schema.table_name
ADD COLUMNS (new_column INT COMMENT 'new number column');
Case 2: Rename a column new_column to no_of_days
ALTER TABLE schema.table_name
CHANGE new_column no_of_days INT;
Note that when renaming, both columns should have the same datatype, like INT above.
A: Even the query below is working for me.
Alter table tbl_name drop col_name
A: For an external table it's simple and easy.
Just drop the table, edit the CREATE TABLE schema, and then create the table again with the new schema.
Example table: aparup_test.tbl_schema_change; we will drop the column id.
steps:-
------------- show create table to fetch schema ------------------
spark.sql("""
show create table aparup_test.tbl_schema_change
""").show(100,False)
o/p:
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP, id BIGINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
------------- drop table --------------------------------
spark.sql("""
drop table aparup_test.tbl_schema_change
""").show(100,False)
------------- edit create table schema by dropping column "id"------------------
spark.sql("""
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
""")
------------- sync up table schema with parquet files ------------------
spark.sql("""
msck repair table aparup_test.tbl_schema_change
""").show(100,False)
==================== DONE ===================================== | unknown | |
d19857 | test | You could use a separate object to define the validation messages:
export const validationMessages = {
required: 'Field is required'
}
// ...code
errors.firstName = validationMessages.required
Then, if you need to change the messages for required, just set the validationMessages.required, like this:
validationMessages.required = 'First name is required'
However, mutating an object with the validation messages isn't a good solution. I strongly recommend using a module called redux-form-validators.
With this module, you can easily override the validation message:
import { required, email } from 'redux-form-validators'
<Field
...
validate={required()} // Will use the default required message
>
<Field
...
validate={required({ message: 'Hey, the first name is required!' })} // Will override the default message
/> | unknown | |
d19858 | test | SELECT emp_id, name, MIN(log) as log
FROM table_name
GROUP BY emp_id, name; | unknown | |
d19859 | test | Simply use
SELECT pg_get_triggerdef(oid)
FROM pg_trigger
WHERE tgname = trigger_name_in;
Besides, never use string concatenation when composing SQL code. The danger of SQL injection is too great. Use the format() function with the %I placeholder, as sketched below.
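A hedged sketch of building dynamic SQL safely (the trigger name is illustrative):
DO $$
DECLARE
    trigger_def text;
BEGIN
    -- %L safely quotes the literal; use %I when splicing in identifiers
    EXECUTE format('SELECT pg_get_triggerdef(oid) FROM pg_trigger WHERE tgname = %L',
                   'my_trigger')
    INTO trigger_def;
    RAISE NOTICE '%', trigger_def;
END $$;
Here %L quotes a literal value; %I would quote an identifier such as a table or column name. | unknown | |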
d19860 | test | If you want it to look like a single commit, you'll need to use git squash. Is there a reason why you can't use this? | unknown | |
d19861 | test | As a best practice, you should not scan and parse poms neither in remote nor local repository. On maven central they already scanned and parsed for you.
Just download nexus-maven-repository-index.gz from index dir (you need that big file 700M length, other files named nexus-maven-repository-index.XXX.gz are incremental updates)
Then use Maven Indexer to unpack index, maven indexer is available as java library and CLI program
As a result of running Maven indexer you'll get ordinary Apache Lucene index, with abitility to update it incrementally.
Here is a documentation, which explains how to unpack index and query data from it.
Most probably, index contain all the data you need.
A: For people still having the same question, I have developed a much simpler way to extract Maven repository indexes that works for most Nexus-based Maven repositories: the Maven Index Exporter.
From there you could simply get a list of poms and download them, if that's what you aim for.
Note however that it's huge: roughly 20 million documents are indexed for Maven Central and the text export is 14GB. There are as of today approximately 6.5 million pom files on Maven Central. | unknown | |
d19862 | test | Acepted answer is data warehouse aproach with creation of separate tables and relations to time table. That's bit too long for me.
But if you can do it easily like this:
Calculate duration using DATEDIFF formula:
time_in_sec = DATEDIFF([time_start]; [time_end]; SECOND)
Then you can use that to calculate your 3 custom columns for hours, minutes and seconds using this:
For hours:
duration_hh = if([time_in_sec]>0;FLOOR( [time_in_sec] / 3600; 1);0)
For minutes:
duration_mm = if([time_in_sec]>0;FLOOR( MOD( [time_in_sec]; 3600) / 60; 1);0)
For seconds:
duration_ss = if([time_in_sec]>0; FLOOR( MOD( MOD( [time_in_sec]; 3600); 60); 1); 0)
Then you can use those 3 calculated columns to filter your data in visuals.
A: The best approach would be to create a new time dimension, link it to your time fields and then use that to apply the filter rather than filtering those fields directly.
Here's a helpful guide giving a couple of ways to apply this. | unknown | |
d19863 | test | Found a work-around answer. You can set the dockerEntrypoint like so:
// build.sbt
dockerEntrypoint := Seq("bin/myapp", "-Dconfig.file=conf/application.prod.conf", "-Dlogger.resource=logback.prod.xml")
A: javaOptions can be supplied to sbt-native-packager with
javaOptions in Universal ++= Seq(
// -J params will be added as jvm parameters
"-J-Xmx2048m",
"-J-Xms256m"
)
Note that these options will be applied to all generated packages (Debian, Rpm, etc.), not just Docker. See the discussion here. | unknown | |
d19864 | test | If you set your database up properly you can just do this info.id; in your onContextItemSelected and that gives the database id | unknown | |
d19865 | test | try:
echo $articles[0]["dates"];
A: foreach($returned_content->find('div.box-inset') as $article) {
$item['dates'] = $article->find('div.important-dates', 0)->plaintext;
$articles[] = $item['dates'];
}
You cannot use echo to output an array; loop over it instead:
foreach($articles as $strarticle)
{
echo $strarticle;
} | unknown | |
d19866 | test | Yes and no. Generally, this is a common pattern:
// create the object, retain count 1
MyObject *myObject = [[MyObject alloc] init];
// increment the retain count in the setter
self.myObjectProperty = myObject;
// let go of the object before the end of the current method
[myObject release];
You can avoid the release, sort of, by using autorelease pools. More accurately, you indicate that you want the object to be released soon:
MyObject *myObject = [[[MyObject alloc] init] autorelease];
self.myObjectProperty = myObject;
// all done!
With many of the Apple-provided classes, you can use class methods other than alloc/init to get objects that are already autoreleased. Your example could be rewritten as:
MyObject *myObject = [[MyObject alloc] init];
myObject.myString = [NSMutableString stringWithFormat:@"bla"];
A final note: -retainCount is a blunt instrument. Particularly with NSStrings and other built-in classes, it may return results that are quite different from what you expect. Generally you should avoid it. | unknown | |
d19867 | test | You should use not "Keyboard shortcut" but rather "Mouse shortcut" from a popup menu (number 2 at the picture):
https://developer.android.com/studio/images/intro/keymap-options_2-2_2x.png
Also, by default on most Linux desktop environments Alt+click is already assigned to window dragging, and OS shortcuts take priority. If that's the case for you, either use a different shortcut in Android Studio or reassign the OS shortcut (in this case Unity): https://askubuntu.com/questions/521423/how-can-i-disable-altclick-window-dragging
After that you should be able to use the shortcut in Android Studio | unknown | |
d19868 | test | You can do it with CSS3, is not necessary javascript for that:
https://jsfiddle.net/rzcdqh8k/
.animation {
width: 100px;
height: 100px;
background-color: red; /* original color */
-webkit-animation-name: example;
-webkit-animation-duration: 3s;
-webkit-animation-iteration-count: infinite;
animation-name: example;
animation-duration: 3s;
animation-iteration-count: infinite;
}
/* Chrome, Safari, Opera */
@-webkit-keyframes example {
from {background-color: red;} /* original color */
to {background-color: white;}
}
/* Standard syntax */
@keyframes example {
from {background-color: red;} /* original color */
to {background-color: white;}
} | unknown | |
d19869 | test | @Bsharp Sadly TagHelpers are only an ASP.NET Core feature and wont work in non-core versions of MVC. | unknown | |
d19870 | test | Months in cron expressions are 1-based. That's why 0 37 17 * 4 ? 2012 is never executed: today is 10th of May and you want it to run on every day of April. When you remove the year it prints next scheduled date in 2013, but in April! myJobKey will run at: Mon Apr 01 18:16:00 EDT 2013.
Obviously your expression should be:
0 37 17 * 5 ? 2012
or to avoid confusion in the future:
0 37 17 * May ? 2012 | unknown | |
d19871 | test | Wild guess: You added the column after running the app once.
If so (I have tried it before and it works fine!), just uninstall and reinstall your app.
OR you can simply increment your DATABASE_VERSION constant.
[EDIT]
But the second method won't work, since your current onUpgrade() method is buggy.
db.execSQL("DROP TABLE IF EXISTS" + Constants.TABLE_NAME);
won't delete the table, and so it won't be recreated either.
You need to insert a space before the table name:
db.execSQL("DROP TABLE IF EXISTS " + Constants.TABLE_NAME); | unknown | |
d19872 | test | The Windows key is covered by VK_LWIN and VK_RWIN, respectively the left and the right key.
The "meta" key is presumably the one that brings up the context menu for the active window, the same one you'd see if you right-click the mouse. It is VK_APPS. Beware that it is not a modifier key.
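A minimal Win32 sketch (polling with GetAsyncKeyState; a real app would usually handle WM_KEYDOWN instead):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* the high bit is set while the key is held down */
    if ((GetAsyncKeyState(VK_LWIN) & 0x8000) || (GetAsyncKeyState(VK_RWIN) & 0x8000))
        puts("a Windows key is down");
    if (GetAsyncKeyState(VK_APPS) & 0x8000)
        puts("the application (menu) key is down");
    return 0;
}
Since VK_APPS is not a modifier, you check it like any ordinary key. | unknown | |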
d19873 | test | if you know your table location in hdfs. This is the most quick way without even opening the hive shell.
You can check you table location in hdfs using command;
show create table <table_name>
then
hdfs dfs -ls <table_path>| sort -k6,7 | tail -1
It will show the latest partition location in HDFS.
A: You can use "show partitions":
hive -e "set hive.cli.print.header=false;show partitions table_name;" | tail -1 | cut -d'=' -f2
This will give you "2016-03-09" as output.
A: If you want to avoid running "show partitions" in the Hive shell as suggested above, you can apply a filter to your max() query. That will avoid doing a full table scan, and results should be fairly quick!
select max(ingest_date) from db.table_name
where ingest_date > date_add(current_date, -3);
This will only scan 2-3 partitions.
A: It looks like there is no way to query for the last partition via Hive (or beeline) CLI that checks only metadata (as one should expect).
For the sake of completeness, the alternative I would propose to the bash parsing answer is the one directly querying the metastore, which can be easily extended to more complex functions of the ingest_date rather than just taking the max. For instance, for a MySQL metastore I've used:
SELECT MAX(PARTITIONS.PART_NAME) FROM
DBS
INNER JOIN
TBLS ON DBS.DB_ID = TBLS.DB_ID
INNER JOIN
PARTITIONS ON TBLS.TBL_ID = PARTITIONS.TBL_ID
WHERE DBS.NAME = 'db'
AND TBLS.TBL_NAME = 'my_table'
Then the output will be in the format partition_name=partition_value. | unknown | |
d19874 | test | Read Write Execute Plain Dir Filename ./script: line 7: syntax error near unexpected token done' ./script: line 7: done'
It's because you need a ; before do.
Bash scans from top to bottom and executes every line, so in the top few lines Bash does not know about FileExists and PrintFileName. You need to put the declarations before calling them.
function FileExists
{
...
}
function IsReadable
{
...
}
# more functions...
# Iterate here and call the above functions.
Cleaner way of iterating:
for var in "$@"
do
FileExists $var
PrintFileName $var
done
You might have problems with formatting because echo spits out a newline, so you might not get things on a single line. Use printf instead, and write out printf "\n" manually.
Also, as @devnull points out, fi is missing from every single if block.
A: while "function Name () " syntax works, I prefer the style returned by declare -f Name as my written form, since I use "declare -f name ..." to reproduce function bodies.
Also, you might factor the "echo Y" and "echo N" out of each function, simply returning the truth of the assertion. So IsReadable becomes:
IsReadable ()
{
test -r $1
}
and used
IsReadable $1 && echo Y || echo N
since I don't find the "&&" (AND) and the "||" (OR) syntax too noisy. Also, I prefer this:
[[ -r $1 ]] && echo Y || echo N
So, my isreadable:
isreadable () { [[ -r $1 ]] ; }
since I allow one-line exceptions to the "declare -f" rule, and even have a function, fbdy, which does
that: if the function body (less header and trailer) fits on one line, show it as a one-liner; otherwise, show it in the default form.
Good to see you using functions. Keep it up. I mightily encourage their use. | unknown | |
d19875 | test | You should code:
let persons = [...this.state.persons]
persons[0].name= "updated name"
this.setState({ persons })
A: Using the spread operator we can achieve this.
let persons = [...this.state.persons]
persons[0].name= "updated name"
this.setState({ persons })
A: The problem is that you mutate component state. Below is an example of an immutable change; I recommend reading articles about how to change React state. Or you can try to use MobX, because it supports mutability.
changePerson(index, field, value) {
const { persons } = this.state;
this.setState({
persons: persons.map((person, i) => i === index ? { ...person, [field]: value } : person)
})
}
// and you can use this method
this.changePerson(0, 'name', 'newName')
A: this.setState(state => (state.persons[0].name = "updated name", state))
A: Assuming some conditional check to find the required person.
const newPersonsData = this.state.persons.map((person) => {
  return person.name === "name1" ? { ...person, name: "new_name" } : person;
  //     ^^^^^^^^^^^^^^^^^^^^^^^ can be some condition
});
this.setState({ ...this.state, persons: [...newPersonsData] });
A: I think the best way is to copy the state into a temporary variable first; after updating that variable you can call setState.
let personsCopy = [...this.state.persons]
personsCopy[0].name = "new name"
this.setState({ persons: personsCopy })
A: Here's how I would do it.
See if that works for you.
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
persons: [
{ name: "John", age: 24 },
{ name: "Ram", age: 44 },
{ name: "Keerthi", age: 23 }
],
status: "Online"
};
this.changeName = this.changeName.bind(this);
this.changeAge = this.changeAge.bind(this);
}
changeName(value,index) {
this.setState((prevState)=>{
const aux = [...prevState.persons];
aux[index] = { ...aux[index], name: value };
return { persons: aux };
});
}
changeAge(value,index) {
this.setState((prevState)=>{
const aux = [...prevState.persons];
aux[index] = { ...aux[index], age: value };
return { persons: aux };
});
}
render() {
const personItems = this.state.persons.map((item,index)=>
<React.Fragment key={index}>
<input value={item.name} onChange={(e)=>this.changeName(e.target.value,index)}/>
<input value={item.age} onChange={(e)=>this.changeAge(e.target.value,index)}/>
<br/>
</React.Fragment>
);
return(
<React.Fragment>
{personItems}
<div>{JSON.stringify(this.state.persons)}</div>
</React.Fragment>
);
}
}
ReactDOM.render(<App/>, document.getElementById('root'));
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.3/umd/react-dom.production.min.js"></script>
<div id="root"></div> | unknown | |
d19876 | test | make was treating the object files as intermediates and deleting them accordingly. Adding:
.SECONDARY: $(OBJS)
solved the problem. I do not know why it was doing this on the first invocation but not the second. Comments are welcome.
A: The reason that the .o files are not present is that they're considered intermediate files so make deletes them. However, that shouldn't cause any problems in your build, because as long as make can envision the intermediate file it will realize it doesn't need to be rebuilt if its prerequisites are older than its parents (in this case, as long as prog1 is newer than prog1.cpp for example).
I was not able to reproduce your experience with the second build rebuilding everything. More details will be needed. The output you showed is not interesting because that's just saying that make does NOT need to rebuild the .o file (it's newer than the prerequisite). You need to find the lines in the output that explain why make does need to rebuild the .o file. If you provide that info we may be able to help.
Just a couple of comments on your makefile: first, I don't think it's a good idea to force the mkdir rule to always succeed. If the mkdir fails you WANT your build to fail. Probably you did this so it would not be a problem if the directory already exists, but that's not needed because the mkdir -p invocation will never fail just because the directory exists (but it will fail if the directory can't be created for other reasons such as permissions). Also you can combine those into a single rule with multiple targets:
$(BINDIR) $(OBJDIR):
@mkdir -p $@
Next, you don't need the semicolons in your command lines and in fact, adding them will cause your builds to be slightly slower.
Finally, a small nit, but the correct order of options in the compile line is -c -o $@ $<; the source file is not (this is a common misconception) an argument to the -c option. The -c option, like -E, -s, etc. tells the compiler what output to create; in the case of -c it means compile into an object file. Those options do not take arguments. The filename is a separate argument. | unknown | |
d19877 | test | SQL Server allows you to add datetime values, so that is a convenient data type for this purpose.
It is easy to convert the date to a datetime -- it is in a standard format.
The time column is tricker, but you can add in ':' for the conversion:
select v.*,
convert(datetime, v.date) + convert(datetime, stuff(stuff(time, 5, 0, ':'), 3, 0, ':'))
from (values ('20210401', '121012')) v(date, time);
A: convert(datetime, DATES + ' ' + substring(TIMES, 1, 2) + ':' + substring(TIMES, 3, 2) + ':' + substring(TIMES, 5, 2) , 121) | unknown | |
d19878 | test | If you are in Excel, right click on the row numbers on the left to show a context menu, then click on "Insert". This will add a new row. | unknown | |
d19879 | test | That would almost certainly be achieved using something like posix_memalign.
A: Since 4Kbytes is often the size of a page (see sysconf(3) with _SC_PAGESIZE or the old getpagesize(2) syscall) you could use mmap(2) syscall (which is used by malloc and posix_memalign) to get 4Kaligned memory.
A: You cannot allocate physically contiguous memory in user space, because for user space the kernel always allocates memory from the highmem zone. But if you are writing a kernel module or other system-space code, then you can use __get_free_page() or __get_free_pages(). | unknown | |
d19880 | test | Firstly, assuming the newline is 100% spurious, I would figure out where it is coming from, and remove it there. But if for some reason that's not an option, the following gsub would work:
self.token = str.gsub(/\n$/, "")
That will only remove a newline if it's the last entry in the string. To remove all newlines, use:
self.token = str.gsub(/\n/, "")
Even easier, the rstrip method will remove trailing whitespace from a string:
self.token = str.rstrip | unknown | |
d19881 | test | if(isset($_POST["add"])){
This will work for you.
Your form has the POST method, so on the PHP side you have to handle it with the $_POST superglobal. | unknown | |
d19882 | test | When these checkboxes are added you don't want to go the server but when user presses submit you are anyways going to server, so at that time you can persist this information about new checkboxes on server.
Another option is to call to server asynchronously using AJAX to update the server about the state change.
A: If you still want to store the changes server-side, you can do so quietly in the background.
Just use XmlHttpRequest(), together with a PHP script.
A: You can use the HTML5 local storage API to store your changes.
localStorage.setItem('favoriteflavor','vanilla');
A: If you want session related information to pass thru multiple requests, you can either put the data in a cookie, or with php you can use sessions.
http://php.net/manual/en/features.sessions.php | unknown | |
d19883 | test | Wrap your entire layout in a <ScrollView>, it just gets truncated because the screen isn't 'high' enough. If you layout would be bigger in height, the first screen would be cut off too. | unknown | |
d19884 | test | If you need headless chrome in your container, choose a container with node14 and headless chrome installed. I found this one with Chrome 89 and released 10 months ago. You could find better source I guess
Else, you can use your node:14 container and, if it's possible, install headless Google Chrome on it (something like that, install work, but I haven't node file to test on it to validate completely that example)
- name: node:14
entrypoint: bash
args:
- -c
- |
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
apt update && apt install -y libappindicator1 fonts-liberation ./google-chrome-stable_current_amd64.deb
npm run test -- --watch=false | unknown | |
d19885 | test | If you're seeing different response it means that
*
*Either you're sending a different request. In this case inspect request details from JMeter and from the real browser using a 3rd-party sniffer tool like Fiddler or Burp, identify the inconsistencies and amend your JMeter configuration so it would send exactly the same request as the real browser does (apart from dynamic values which need to be correlated)
*Or one of the previous requests fails somewhere, somehow. JMeter automatically treats HTTP responses with status codes below 400 as successful; it might be the case that they are not really successful (you can check it by inspecting the Response Data tab of the View Results Tree listener). Try adding Response Assertions to the HTTP Request samplers so there will be another layer of explicit checks of the response data; this way you will get confidence that JMeter is doing what it is supposed to be doing. | unknown | |
d19886 | test | The jfxmobile plugin allows changing the path where the apk will be created.
Use installDirectory:
jfxmobile {
downConfig {
version = '3.0.0'
plugins 'display', 'lifecycle', 'statusbar', 'storage'
}
android {
installDirectory = file('/full/path/of/custom/folder')
manifest = 'src/android/AndroidManifest.xml'
}
}
Be aware that the folder should exist before running the android task. Currently the plugin manages that for the default installation folder (removing it, and the apk, if they exist, and creating it again on every run). So you have to do it yourself, otherwise the task will skip it.
EDIT
The list of global variables that are intended to be modified if necessary is here, but the full list of variables currently included in the plugin can be found in the plugin source code.
Variables like installDirectory are used internally by the plugin and they are initialized with a default value, perform some actions like deleting the previous directory and creating it again (so Gradle performs the task). In case of overriding, these actions won't be executed, so you should take care of that yourself (or create a task for that).
A: This works for the standard android plugin to change the directory of the generated APKs:
android {
applicationVariants.all { variant ->
variant.outputs.each { output ->
output.outputFile = file("/some/dir/" + variant.name + "/" + archivesBaseName + ".apk")
}
}
} | unknown | |
d19887 | test | To see if object is selected:
if($(".foo").is(':focus')){
// do something
}
To change values on keypress:
$(".foo").bind("keypress", function(e){
$(this).attr('value', 'bar');
})
Though not sure what you mean by changing the values of a drop down, or why you'd want to do that. | unknown | |
d19888 | test | FetchTask directly fetches data, whereas Mapreduce will invoke a map reduce job
<property>
<name>hive.fetch.task.conversion</name>
<value>minimal</value>
<description>
Some select queries can be converted to single FETCH task
minimizing latency. Currently the query should be single
sourced not having any subquery and should not have
any aggregations or distincts (which incurrs RS),
lateral views and joins.
1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only
2. more : SELECT, FILTER, LIMIT only (+TABLESAMPLE, virtual columns)
</description>
</property>
Also there is another parameter, hive.fetch.task.conversion.threshold, which by default is -1 in 0.10-0.13 and 1G (1073741824) in 0.14+.
This indicates that if the table size is greater than 1G, MapReduce is used instead of a fetch task, as in the session example below.
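For example (the table name is illustrative), you can toggle this per session in the Hive CLI:
hive> SET hive.fetch.task.conversion=more;
hive> SET hive.fetch.task.conversion.threshold=1073741824;
hive> SELECT * FROM my_table LIMIT 10;   -- served by a fetch task, no MR job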
more detail | unknown | |
d19889 | test | Containers run at the same performance level as the host OS. There is no process performance hit. I created a whitepaper with Docker and HPE on this.
You wouldn't use pm2 or nodemon, which are meant to start multiple processes of your node app and restart them if they fail. That's the job of Docker now.
If in Swarm, you'd just increase the replica count of your service to be similar to the number of CPU/threads you'd want to run at the same time in the swarm.
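For example (the service name is illustrative):
docker service scale my-node-app=4
# or equivalently
docker service update --replicas 4 my-node-app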
I don't mention the nodemon/pm2 thing for Swarm in my node-docker-good-defaults, so I'll add that as an issue to update it for. | unknown | |
d19890 | test | Replace your .htaccess code with this:
RewriteEngine On
RewriteRule ^$ /articles/ [L,R=301]
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^([^/.]+)/?$ $1.php [L]
RewriteRule ^articles/([a-z0-9-]+)/([0-9]+)/?$ articles.php?id=$2&desc=$1 [L,QSA,NC]
You must also add this just below <head> tag of your page's HTML:
<base href="/" />
so that every relative URL is resolved from that base URL and not from the current page's URL. | unknown | |
d19891 | test | Seems the problem was coming from the following gem:
gem "rspec-legacy_formatters", :group => [:development, :test]
Not sure why I added it to the Gemfile. I removed it and did a
$ bundle install
which solved the problem | unknown | |
d19892 | test | A PHP driver (windows only) exists for NexusDB, currently it supports up to PHP v5.x. PHP v7.x support is being worked on.
Other drivers are either ODBC or Ado.NET. Perhaps say more about what you are trying to do? | unknown | |
d19893 | test | It now works, I am not sure what changed to fix this other than the fact i have placed the commands in a function this time, but it is all working as desired. | unknown | |
d19894 | test | What you did wrong is that you used gsub!. That takes a string and changes the string. It doesn't turn it into anything else, no matter what you do (even if you convert it to a float in the middle).
A simple way to achieve what you want is:
[["My", "2"], ["Cute"], ["Dog", "4"]].map{|s1, s2| [s1, *(s2.to_f if s2)]}
If you do not want to create the element array, but replace its contents, then:
[["My", "2"], ["Cute"], ["Dog", "4"]].each{|a| a[1] = a[1].to_f if a[1]}
If the numerical strings appear in random positions, then:
[["My", "2"], ["Cute"], ["Dog", "4"]]
.each{|a| a.each.with_index{|e, i| a[i] = a[i].to_f if a[i] and a[i] =~ /\d+/}} | unknown | |
d19895 | test | Please try the following code. This code, will remove your div node from it's parent. div will be moved into body tag.
dojo.require("dijit.Dialog");
var myDialog=new dijit.Dialog(
{
title:"Dialog Title",
content:dojo.byId("divNodeID")
}
);
myDialog.show();
Hope this helps you. Thanks!
A: If you surround your form with the HTML that will create a Dialog, it should work.
For example, if your code is:
<form>
... some HTML ...
</form>
then consider coding:
<div data-dojo-type="dijit/Dialog" data-dojo-id="myDialog" title="MyTitle">
<form>
... some HTML ...
</form>
</div> | unknown | |
d19896 | test | def <<( rating ):
In your example, this is used to add a rating to a rateable model. (E.g. in acts_as_rateable.rb:41), similar to appending something to a string (str << "abc"). As it is within a module, it will only be included for the models that you declare as rateable.
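A hedged, self-contained sketch of the same idea (the class and attribute names are made up):
class Rateable
  def initialize
    @ratings = []
  end

  # `product << rating` is just sugar for `product.<<(rating)`
  def <<(rating)
    @ratings << rating
    self # returning self allows chaining: product << 4 << 5
  end
end

Rateable.new << 4 << 5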
class << ClassName:
All the methods inside of this block will be static / class methods (see this blog entry). (In this case, all the models will have the methods Model.example_static_method.)
A: Nearly all operators in Ruby are actually instance methods called on the object preceding them.
There are many different uses for << depending on the object type you're calling it on. For example, in an array this works to push the given value onto the end of the array.
It looks like this is for a Rails model object, so in that case I would say that this is an auxiliary method called when you append a model object to a model object collection. For example, in this case you might be appending a Rating to a Product.
If you showed the whole method definition and showed what class it's in, I could provide a more specific answer. | unknown | |
d19897 | test | Maybe something like this:
<?xml version='1.0' encoding="UTF-8"?>
<document xmlns:xi="http://www.w3.org/2001/XInclude">
<p>Text of my document</p>
<xi:include href="copyright.xml"/>
</document>
https://en.wikipedia.org/wiki/XInclude
A: The example below should work to include an external file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "testng-1.0.dtd"
[<!ENTITY parent SYSTEM "./src/test/resources/dependencies.xml">]>
<suite name="suite1">
<test name="Managetestss" preserve-order="true">
&parent;
<parameter name="browser" value="chrome"></parameter>
<classes>
<class name="com.az.tests.commun.TNR_Client_001_Ajouter_Client" />
<class name="com.az.tests.client.TNR_Client_002_Rechercher_Client" />
</classes>
</test>
</suite>
However, the dependencies.xml you provided will not work because the TestNG DTD doesn't support a group tag under groups. Refer to groups-of-groups and dependencies-in-xml | unknown | |
d19898 | test | To find out how the query is executed, don't use EXPLAIN but EXPLAIN QUERY PLAN:
explain query plan select * from Shares where toId=3 and fromId=3 order by time desc;
0|0|0|SCAN TABLE Shares USING INDEX Shares_time_toId_fromId
In this query, the toId and fromId values are read from the index, but this does not matter because the actual table has to be read anyway to get the shareId value.
If the query did not try to read the shareId column, or if the shareId column had type INTEGER so that it would be an alias for the rowid and thus be part of the index, the separate table lookup step would not be needed.
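A hedged sketch of the second option (the other column types are guesses):
-- With INTEGER PRIMARY KEY, shareId aliases the rowid, which every index entry
-- already carries, so the index alone can answer the query.
CREATE TABLE Shares(shareId INTEGER PRIMARY KEY, toId INT, fromId INT, time INT);
CREATE INDEX Shares_time_toId_fromId ON Shares(time, toId, fromId);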
(Note: the latest version of the sqlite3 tool formats the EXPLAIN output better.) | unknown | |
d19899 | test | From what I see in your question, you'll need to set_index():
df
date close
0 1980-12-12 28.75
1 1980-12-15 27.25
2 1980-12-16 25.25
3 1980-12-17 25.87
4 1980-12-18 26.63
5 1980-12-19 28.25
6 1980-12-22 29.63
7 1980-12-23 30.88
8 1980-12-24 32.50
9 1980-12-26 35.50
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
dates = pd.date_range(df.index.min(), df.index.max())  # full daily range, including 1980-12-25
df = df.reindex(dates)
df
close
date
1980-12-12 28.75
1980-12-15 27.25
1980-12-16 25.25
1980-12-17 25.87
1980-12-18 26.63
1980-12-19 28.25
1980-12-22 29.63
1980-12-23 30.88
1980-12-24 32.50
1980-12-25 NaN
1980-12-26 35.50
You need to set the index so pandas knows how to align your new index. Is this your expected output? | unknown | |
d19900 | test | Add a cache property with the value false to the objects you pass to jQuery.ajax.
$.ajax({
url: "AJAXHandler.ashx",
cache: false,
data: { "lt": "loadcontrol" },
dataType: "html",
success: function(data) {
content.html(data);
}
});
You can set this globally by:
jQuery.ajaxSetup({
cache: false
});
jQuery will use cached responses for anything except SCRIPT/JSONP by default.
A: If your btnLoad is inside the user control you're reloading, you need to do this instead:
$("#btnLoad").live('click', function() {
When loading ajax, you're replacing the elements, and any event handlers bound directly to them. If you use .live() instead, it won't be destroyed as part of the callback... it works differently and the event handler lives higher in the DOM, so it doesn't get blown away with the button itself. | unknown | |